00:00:00.001 Started by upstream project "autotest-nightly" build number 4336
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3699
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.144 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.145 The recommended git tool is: git
00:00:00.145 using credential 00000000-0000-0000-0000-000000000002
00:00:00.147 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.188 Fetching changes from the remote Git repository
00:00:00.190 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.224 Using shallow fetch with depth 1
00:00:00.224 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.224 > git --version # timeout=10
00:00:00.254 > git --version # 'git version 2.39.2'
00:00:00.254 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.272 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.272 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.640 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.652 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.664 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.664 > git config core.sparsecheckout # timeout=10
00:00:08.675 > git read-tree -mu HEAD # timeout=10
00:00:08.695 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.716 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.717 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.860 [Pipeline] Start of Pipeline
00:00:08.876 [Pipeline] library
00:00:08.877 Loading library shm_lib@master
00:00:08.878 Library shm_lib@master is cached. Copying from home.
00:00:08.894 [Pipeline] node
00:00:08.907 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:08.908 [Pipeline] {
00:00:08.916 [Pipeline] catchError
00:00:08.918 [Pipeline] {
00:00:08.930 [Pipeline] wrap
00:00:08.938 [Pipeline] {
00:00:08.943 [Pipeline] stage
00:00:08.945 [Pipeline] { (Prologue)
00:00:08.958 [Pipeline] echo
00:00:08.959 Node: VM-host-SM38
00:00:08.963 [Pipeline] cleanWs
00:00:08.972 [WS-CLEANUP] Deleting project workspace...
00:00:08.972 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.979 [WS-CLEANUP] done
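For reference, the checkout above is fully pinned: a depth-1 fetch of refs/heads/master followed by a forced checkout of the fetched SHA. A minimal standalone equivalent of that sequence (repo URL and commit taken from the log; the target directory name is illustrative):

    # Sketch: reproduce Jenkins' pinned shallow checkout by hand.
    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    # depth-1 fetch of master only, then detach onto the exact revision
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507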
00:00:09.159 [Pipeline] setCustomBuildProperty
00:00:09.256 [Pipeline] httpRequest
00:00:09.763 [Pipeline] echo
00:00:09.765 Sorcerer 10.211.164.20 is alive
00:00:09.776 [Pipeline] retry
00:00:09.778 [Pipeline] {
00:00:09.792 [Pipeline] httpRequest
00:00:09.797 HttpMethod: GET
00:00:09.798 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.798 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.820 Response Code: HTTP/1.1 200 OK
00:00:09.820 Success: Status code 200 is in the accepted range: 200,404
00:00:09.821 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:24.629 [Pipeline] }
00:00:24.648 [Pipeline] // retry
00:00:24.656 [Pipeline] sh
00:00:24.944 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:24.963 [Pipeline] httpRequest
00:00:25.347 [Pipeline] echo
00:00:25.349 Sorcerer 10.211.164.20 is alive
00:00:25.358 [Pipeline] retry
00:00:25.360 [Pipeline] {
00:00:25.375 [Pipeline] httpRequest
00:00:25.380 HttpMethod: GET
00:00:25.381 URL: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz
00:00:25.382 Sending request to url: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz
00:00:25.397 Response Code: HTTP/1.1 200 OK
00:00:25.398 Success: Status code 200 is in the accepted range: 200,404
00:00:25.399 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz
00:00:52.318 [Pipeline] }
00:00:52.336 [Pipeline] // retry
00:00:52.344 [Pipeline] sh
00:00:52.630 + tar --no-same-owner -xf spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz
00:00:55.180 [Pipeline] sh
00:00:55.466 + git -C spdk log --oneline -n5
00:00:55.466 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing
00:00:55.466 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller
00:00:55.466 48454bb28 bdev/nvme: Add lock to unprotected operations around detach controller
00:00:55.466 4b59d7893 bdev/nvme: Use nbdev always for local nvme_bdev pointer variables
00:00:55.466 e56f1618f lib/ftl: Add explicit support for write unit sizes of base device
00:00:55.488 [Pipeline] writeFile
00:00:55.502 [Pipeline] sh
00:00:55.788 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:55.802 [Pipeline] sh
00:00:56.087 + cat autorun-spdk.conf
00:00:56.087 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:56.087 SPDK_TEST_NVME=1
00:00:56.087 SPDK_TEST_FTL=1
00:00:56.087 SPDK_TEST_ISAL=1
00:00:56.087 SPDK_RUN_ASAN=1
00:00:56.087 SPDK_RUN_UBSAN=1
00:00:56.087 SPDK_TEST_XNVME=1
00:00:56.087 SPDK_TEST_NVME_FDP=1
00:00:56.087 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:56.096 RUN_NIGHTLY=1
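The conf file dumped above is plain shell assignments; later stages source it and branch on the flags, which is why SPDK_TEST_FTL=1 and SPDK_TEST_NVME_FDP=1 grow the set of backing images in the prepare_nvme.sh trace below. A simplified sketch of that consumption pattern (variable names taken from the trace):

    # Sketch of how autorun-spdk.conf gates later steps (pattern visible
    # in the prepare_nvme.sh xtrace below).
    source ./autorun-spdk.conf
    declare -A nvme_files=( [nvme.img]=5G )
    # FTL and FDP test flags each add an extra backing image
    (( SPDK_TEST_FTL == 1 ))      && nvme_files["nvme-ftl.img"]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files["nvme-fdp.img"]=1G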
00:00:56.098 [Pipeline] }
00:00:56.115 [Pipeline] // stage
00:00:56.131 [Pipeline] stage
00:00:56.133 [Pipeline] { (Run VM)
00:00:56.147 [Pipeline] sh
00:00:56.439 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:56.439 + echo 'Start stage prepare_nvme.sh'
00:00:56.439 Start stage prepare_nvme.sh
00:00:56.439 + [[ -n 5 ]]
00:00:56.439 + disk_prefix=ex5
00:00:56.439 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:00:56.439 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:00:56.439 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:00:56.439 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:56.439 ++ SPDK_TEST_NVME=1
00:00:56.439 ++ SPDK_TEST_FTL=1
00:00:56.439 ++ SPDK_TEST_ISAL=1
00:00:56.439 ++ SPDK_RUN_ASAN=1
00:00:56.439 ++ SPDK_RUN_UBSAN=1
00:00:56.439 ++ SPDK_TEST_XNVME=1
00:00:56.439 ++ SPDK_TEST_NVME_FDP=1
00:00:56.439 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:56.439 ++ RUN_NIGHTLY=1
00:00:56.439 + cd /var/jenkins/workspace/nvme-vg-autotest
00:00:56.439 + nvme_files=()
00:00:56.439 + declare -A nvme_files
00:00:56.439 + backend_dir=/var/lib/libvirt/images/backends
00:00:56.439 + nvme_files['nvme.img']=5G
00:00:56.439 + nvme_files['nvme-cmb.img']=5G
00:00:56.439 + nvme_files['nvme-multi0.img']=4G
00:00:56.439 + nvme_files['nvme-multi1.img']=4G
00:00:56.439 + nvme_files['nvme-multi2.img']=4G
00:00:56.439 + nvme_files['nvme-openstack.img']=8G
00:00:56.439 + nvme_files['nvme-zns.img']=5G
00:00:56.439 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:56.439 + (( SPDK_TEST_FTL == 1 ))
00:00:56.439 + nvme_files["nvme-ftl.img"]=6G
00:00:56.439 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:56.439 + nvme_files["nvme-fdp.img"]=1G
00:00:56.439 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:56.439 + for nvme in "${!nvme_files[@]}"
00:00:56.439 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:00:56.703 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:56.703 + for nvme in "${!nvme_files[@]}"
00:00:56.703 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G
00:00:57.644 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:57.644 + for nvme in "${!nvme_files[@]}"
00:00:57.644 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:00:57.644 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:57.644 + for nvme in "${!nvme_files[@]}"
00:00:57.644 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:00:57.644 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:57.644 + for nvme in "${!nvme_files[@]}"
00:00:57.644 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:00:57.644 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:57.644 + for nvme in "${!nvme_files[@]}"
00:00:57.644 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:00:57.904 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:57.904 + for nvme in "${!nvme_files[@]}"
00:00:57.904 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:00:58.476 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:58.476 + for nvme in "${!nvme_files[@]}"
00:00:58.476 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G
00:00:58.476 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:58.476 + for nvme in "${!nvme_files[@]}"
00:00:58.476 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:00:59.418 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:59.418 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:00:59.418 + echo 'End stage prepare_nvme.sh'
00:00:59.418 End stage prepare_nvme.sh
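Each create_nvme_img.sh call above reports qemu-img-style output ("Formatting ... fmt=raw ... preallocation=falloc"). Assuming the script is a thin wrapper over qemu-img (its internals are not shown in this log), a one-disk equivalent would be:

    # Sketch: create one raw, falloc-preallocated NVMe backing image by hand;
    # mirrors only the Formatting output printed above.
    sudo qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex5-nvme.img 5G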
00:00:59.431 [Pipeline] sh
00:00:59.716 + DISTRO=fedora39
00:00:59.716 + CPUS=10
00:00:59.716 + RAM=12288
00:00:59.716 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:59.716 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:00:59.716
00:00:59.716 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:00:59.716 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:00:59.716 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:00:59.716 HELP=0
00:00:59.716 DRY_RUN=0
00:00:59.716 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,
00:00:59.716 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:59.716 NVME_AUTO_CREATE=0
00:00:59.716 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,,
00:00:59.716 NVME_CMB=,,,,
00:00:59.716 NVME_PMR=,,,,
00:00:59.716 NVME_ZNS=,,,,
00:00:59.716 NVME_MS=true,,,,
00:00:59.716 NVME_FDP=,,,on,
00:00:59.716 SPDK_VAGRANT_DISTRO=fedora39
00:00:59.716 SPDK_VAGRANT_VMCPU=10
00:00:59.716 SPDK_VAGRANT_VMRAM=12288
00:00:59.716 SPDK_VAGRANT_PROVIDER=libvirt
00:00:59.716 SPDK_VAGRANT_HTTP_PROXY=
00:00:59.716 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:59.716 SPDK_OPENSTACK_NETWORK=0
00:00:59.716 VAGRANT_PACKAGE_BOX=0
00:00:59.716 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:59.716 FORCE_DISTRO=true
00:00:59.716 VAGRANT_BOX_VERSION=
00:00:59.716 EXTRA_VAGRANTFILES=
00:00:59.716 NIC_MODEL=e1000
00:00:59.716
00:00:59.716 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:00:59.716 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:02.264 Bringing machine 'default' up with 'libvirt' provider...
00:01:02.526 ==> default: Creating image (snapshot of base box volume).
00:01:02.787 ==> default: Creating domain with the following settings...
00:01:02.787 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733391170_1a7dfc16e689220e645f
00:01:02.787 ==> default: -- Domain type: kvm
00:01:02.787 ==> default: -- Cpus: 10
00:01:02.787 ==> default: -- Feature: acpi
00:01:02.787 ==> default: -- Feature: apic
00:01:02.787 ==> default: -- Feature: pae
00:01:02.787 ==> default: -- Memory: 12288M
00:01:02.787 ==> default: -- Memory Backing: hugepages:
00:01:02.787 ==> default: -- Management MAC:
00:01:02.787 ==> default: -- Loader:
00:01:02.787 ==> default: -- Nvram:
00:01:02.787 ==> default: -- Base box: spdk/fedora39
00:01:02.787 ==> default: -- Storage pool: default
00:01:02.787 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733391170_1a7dfc16e689220e645f.img (20G)
00:01:02.787 ==> default: -- Volume Cache: default
00:01:02.787 ==> default: -- Kernel:
00:01:02.787 ==> default: -- Initrd:
00:01:02.787 ==> default: -- Graphics Type: vnc
00:01:02.787 ==> default: -- Graphics Port: -1
00:01:02.787 ==> default: -- Graphics IP: 127.0.0.1
00:01:02.787 ==> default: -- Graphics Password: Not defined
00:01:02.787 ==> default: -- Video Type: cirrus
00:01:02.787 ==> default: -- Video VRAM: 9216
00:01:02.787 ==> default: -- Sound Type:
00:01:02.787 ==> default: -- Keymap: en-us
00:01:02.787 ==> default: -- TPM Path:
00:01:02.787 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:02.787 ==> default: -- Command line args:
00:01:02.787 ==> default: -> value=-device,
00:01:02.787 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:02.787 ==> default: -> value=-drive,
00:01:02.787 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:02.787 ==> default: -> value=-device,
00:01:02.788 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:02.788 ==> default: -> value=-device,
00:01:02.788 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:02.788 ==> default: -> value=-drive,
00:01:02.788 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0,
00:01:02.788 ==> default: -> value=-device,
00:01:02.788 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:02.788 ==> default: -> value=-device,
00:01:02.788 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:02.788 ==> default: -> value=-drive,
00:01:02.788 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:02.788 ==> default: -> value=-device,
00:01:02.788 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:02.788 ==> default: -> value=-drive,
00:01:02.788 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:02.788 ==> default: -> value=-device,
00:01:02.788 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:02.788 ==> default: -> value=-drive,
00:01:02.788 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:02.788 ==> default: -> value=-device,
00:01:02.788 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:02.788 ==> default: -> value=-device,
00:01:02.788 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:02.788 ==> default: -> value=-device,
00:01:02.788 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:02.788 ==> default: -> value=-drive,
00:01:02.788 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:02.788 ==> default: -> value=-device,
00:01:02.788 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
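The argument list above repeats one stanza per disk: a -drive for the raw backing file, a -device nvme controller (plus an nvme-subsys parent for the FDP case), and one -device nvme-ns per namespace. Distilled to a single runnable controller/namespace pair (backing-file path from the log; the remaining flags are illustrative, not the exact VM invocation):

    # Sketch: minimal QEMU NVMe controller with one 4K namespace, using the
    # same stanza pattern as the vagrant-generated args above.
    qemu-system-x86_64 -m 1024 -nographic \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0 \
        -device nvme,id=nvme-1,serial=12341 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096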
00:01:02.788 ==> default: Creating shared folders metadata...
00:01:02.788 ==> default: Starting domain.
00:01:04.705 ==> default: Waiting for domain to get an IP address...
00:01:22.827 ==> default: Waiting for SSH to become available...
00:01:22.827 ==> default: Configuring and enabling network interfaces...
00:01:27.038 default: SSH address: 192.168.121.12:22
00:01:27.039 default: SSH username: vagrant
00:01:27.039 default: SSH auth method: private key
00:01:28.426 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:38.480 ==> default: Mounting SSHFS shared folder...
00:01:38.737 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:38.737 ==> default: Checking Mount..
00:01:40.123 ==> default: Folder Successfully Mounted!
00:01:40.123
00:01:40.123 SUCCESS!
00:01:40.123
00:01:40.123 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:40.123 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:40.123 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:40.123
00:01:40.134 [Pipeline] }
00:01:40.150 [Pipeline] // stage
00:01:40.159 [Pipeline] dir
00:01:40.160 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:01:40.162 [Pipeline] {
00:01:40.177 [Pipeline] catchError
00:01:40.179 [Pipeline] {
00:01:40.192 [Pipeline] sh
00:01:40.477 + vagrant ssh-config --host vagrant
00:01:40.477 + sed -ne '/^Host/,$p'
00:01:40.477 + tee ssh_conf
00:01:43.020 Host vagrant
00:01:43.020 HostName 192.168.121.12
00:01:43.020 User vagrant
00:01:43.020 Port 22
00:01:43.020 UserKnownHostsFile /dev/null
00:01:43.020 StrictHostKeyChecking no
00:01:43.020 PasswordAuthentication no
00:01:43.020 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:43.020 IdentitiesOnly yes
00:01:43.020 LogLevel FATAL
00:01:43.020 ForwardAgent yes
00:01:43.020 ForwardX11 yes
00:01:43.020
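The ssh_conf captured above is what lets every later step address the guest as plain vagrant@vagrant. A usage sketch mirroring the ssh/scp invocations that follow (the script name is illustrative):

    # Run an ad-hoc command and copy a file into the VM via the captured config.
    ssh -t -F ssh_conf vagrant@vagrant 'uname -r'
    scp -F ssh_conf ./local-script.sh vagrant@vagrant:./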
00:01:43.032 [Pipeline] withEnv
00:01:43.034 [Pipeline] {
00:01:43.046 [Pipeline] sh
00:01:43.327 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:43.327 source /etc/os-release
00:01:43.327 [[ -e /image.version ]] && img=$(< /image.version)
00:01:43.327 # Minimal, systemd-like check.
00:01:43.327 if [[ -e /.dockerenv ]]; then
00:01:43.327 # Clear garbage from the node'\''s name:
00:01:43.327 # agt-er_autotest_547-896 -> autotest_547-896
00:01:43.327 # $HOSTNAME is the actual container id
00:01:43.327 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:43.327 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:43.327 # We can assume this is a mount from a host where container is running,
00:01:43.327 # so fetch its hostname to easily identify the target swarm worker.
00:01:43.327 container="$(< /etc/hostname) ($agent)"
00:01:43.327 else
00:01:43.327 # Fallback
00:01:43.327 container=$agent
00:01:43.327 fi
00:01:43.327 fi
00:01:43.327 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:43.327 '
00:01:43.603 [Pipeline] }
00:01:43.620 [Pipeline] // withEnv
00:01:43.628 [Pipeline] setCustomBuildProperty
00:01:43.642 [Pipeline] stage
00:01:43.645 [Pipeline] { (Tests)
00:01:43.663 [Pipeline] sh
00:01:43.947 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:44.221 [Pipeline] sh
00:01:44.503 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:44.782 [Pipeline] timeout
00:01:44.782 Timeout set to expire in 50 min
00:01:44.784 [Pipeline] {
00:01:44.800 [Pipeline] sh
00:01:45.085 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:01:45.657 HEAD is now at 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing
00:01:45.673 [Pipeline] sh
00:01:45.958 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:01:46.236 [Pipeline] sh
00:01:46.519 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:46.855 [Pipeline] sh
00:01:47.170 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:01:47.170 ++ readlink -f spdk_repo
00:01:47.170 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:47.170 + [[ -n /home/vagrant/spdk_repo ]]
00:01:47.170 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:47.170 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:47.170 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:47.170 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:47.170 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:47.170 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:47.170 + cd /home/vagrant/spdk_repo
00:01:47.170 + source /etc/os-release
00:01:47.170 ++ NAME='Fedora Linux'
00:01:47.170 ++ VERSION='39 (Cloud Edition)'
00:01:47.170 ++ ID=fedora
00:01:47.170 ++ VERSION_ID=39
00:01:47.170 ++ VERSION_CODENAME=
00:01:47.170 ++ PLATFORM_ID=platform:f39
00:01:47.170 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:47.170 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:47.170 ++ LOGO=fedora-logo-icon
00:01:47.170 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:47.170 ++ HOME_URL=https://fedoraproject.org/
00:01:47.170 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:47.170 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:47.170 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:47.170 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:47.170 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:47.170 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:47.170 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:47.170 ++ SUPPORT_END=2024-11-12
00:01:47.170 ++ VARIANT='Cloud Edition'
00:01:47.170 ++ VARIANT_ID=cloud
00:01:47.170 + uname -a
00:01:47.170 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:47.170 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:47.742 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:48.002 Hugepages
00:01:48.002 node hugesize free / total
00:01:48.002 node0 1048576kB 0 / 0
00:01:48.002 node0 2048kB 0 / 0
00:01:48.002
00:01:48.002 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:48.002 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:48.002 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:48.002 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:01:48.002 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:48.002 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
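The status table can be cross-checked from inside the guest; 1b36:0010 is QEMU's emulated NVMe controller PCI ID, so the four controllers (one of them, nvme1, carrying the three multi-namespace images) and the 2 MiB hugepage pool should be visible via standard tooling (illustrative commands, not part of this log):

    lspci -nn -d 1b36:0010                                     # the four emulated NVMe controllers
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages  # size of the 2 MiB hugepage pool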
00:01:48.263 + rm -f /tmp/spdk-ld-path
00:01:48.263 + source autorun-spdk.conf
00:01:48.263 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:48.263 ++ SPDK_TEST_NVME=1
00:01:48.263 ++ SPDK_TEST_FTL=1
00:01:48.263 ++ SPDK_TEST_ISAL=1
00:01:48.263 ++ SPDK_RUN_ASAN=1
00:01:48.263 ++ SPDK_RUN_UBSAN=1
00:01:48.263 ++ SPDK_TEST_XNVME=1
00:01:48.263 ++ SPDK_TEST_NVME_FDP=1
00:01:48.263 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:48.263 ++ RUN_NIGHTLY=1
00:01:48.263 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:48.263 + [[ -n '' ]]
00:01:48.263 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:48.263 + for M in /var/spdk/build-*-manifest.txt
00:01:48.263 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:48.263 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:48.263 + for M in /var/spdk/build-*-manifest.txt
00:01:48.263 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:48.263 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:48.263 + for M in /var/spdk/build-*-manifest.txt
00:01:48.263 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:48.263 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:48.263 ++ uname
00:01:48.263 + [[ Linux == \L\i\n\u\x ]]
00:01:48.263 + sudo dmesg -T
00:01:48.263 + sudo dmesg --clear
00:01:48.263 + dmesg_pid=5030
00:01:48.263 + [[ Fedora Linux == FreeBSD ]]
00:01:48.263 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:48.263 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:48.263 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:48.263 + [[ -x /usr/src/fio-static/fio ]]
00:01:48.263 + sudo dmesg -Tw
00:01:48.263 + export FIO_BIN=/usr/src/fio-static/fio
00:01:48.263 + FIO_BIN=/usr/src/fio-static/fio
00:01:48.263 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:48.263 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:48.263 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:48.263 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:48.263 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:48.263 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:48.263 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:48.263 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:48.263 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:48.263 09:33:35 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:48.263 09:33:35 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:48.263 09:33:35 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:48.263 09:33:35 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:01:48.263 09:33:35 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:01:48.263 09:33:35 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:01:48.263 09:33:35 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:48.263 09:33:35 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:48.263 09:33:35 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:01:48.263 09:33:35 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:01:48.263 09:33:35 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:48.263 09:33:35 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1
00:01:48.263 09:33:35 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:48.263 09:33:35 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:48.524 09:33:35 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:48.524 09:33:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:48.524 09:33:35 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:48.524 09:33:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:48.524 09:33:35 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:48.524 09:33:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:48.524 09:33:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:48.524 09:33:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:48.524 09:33:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:48.524 09:33:35 -- paths/export.sh@5 -- $ export PATH
00:01:48.524 09:33:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:48.524 09:33:35 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:48.524 09:33:35 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:48.524 09:33:35 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733391215.XXXXXX
00:01:48.524 09:33:35 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733391215.GwYOu9
00:01:48.524 09:33:35 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:48.524 09:33:35 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:48.524 09:33:35 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:48.524 09:33:35 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:48.524 09:33:35 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:48.524 09:33:35 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:48.524 09:33:35 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:48.524 09:33:35 -- common/autotest_common.sh@10 -- $ set +x
00:01:48.524 09:33:35 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:48.524 09:33:35 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:48.524 09:33:35 -- pm/common@17 -- $ local monitor
00:01:48.524 09:33:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:48.524 09:33:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:48.524 09:33:35 -- pm/common@25 -- $ sleep 1
00:01:48.524 09:33:35 -- pm/common@21 -- $ date +%s
00:01:48.524 09:33:35 -- pm/common@21 -- $ date +%s
00:01:48.524 09:33:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733391215
00:01:48.524 09:33:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733391215
00:01:48.524 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733391215_collect-vmstat.pm.log
00:01:48.524 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733391215_collect-cpu-load.pm.log
00:01:49.469 09:33:36 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:49.469 09:33:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:49.469 09:33:36 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:49.469 09:33:36 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:49.469 09:33:36 -- spdk/autobuild.sh@16 -- $ date -u
00:01:49.469 Thu Dec 5 09:33:36 AM UTC 2024
00:01:49.469 09:33:36 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:49.469 v25.01-pre-296-g8d3947977
00:01:49.469 09:33:36 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:49.469 09:33:36 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:49.469 09:33:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:49.469 09:33:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:49.469 09:33:36 -- common/autotest_common.sh@10 -- $ set +x
00:01:49.469 ************************************
00:01:49.469 START TEST asan
00:01:49.469 ************************************
00:01:49.469 using asan
00:01:49.469 09:33:36 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:49.469
00:01:49.469 real 0m0.000s
00:01:49.469 user 0m0.000s
00:01:49.469 sys 0m0.000s
00:01:49.469 09:33:36 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:49.469 ************************************
00:01:49.469 END TEST asan
00:01:49.469 ************************************
00:01:49.469 09:33:36 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:49.469 09:33:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:49.469 09:33:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:49.469 09:33:37 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:49.469 09:33:37 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:49.469 09:33:37 -- common/autotest_common.sh@10 -- $ set +x
00:01:49.469 ************************************
00:01:49.469 START TEST ubsan
00:01:49.469 ************************************
00:01:49.469 using ubsan
00:01:49.469 09:33:37 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:49.469
00:01:49.469 real 0m0.000s
00:01:49.469 user 0m0.000s
00:01:49.469 sys 0m0.000s
00:01:49.469 ************************************
00:01:49.469 END TEST ubsan
00:01:49.469 ************************************
00:01:49.469 09:33:37 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:49.469 09:33:37 ubsan -- common/autotest_common.sh@10 -- $ set +x
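The asan/ubsan blocks above are produced by run_test from SPDK's autotest_common.sh, which brackets a timed command with START/END banners. A minimal sketch of the pattern only (not SPDK's actual implementation, which additionally manages xtrace state and exit-code propagation):

    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"; local rc=$?     # run the command under test, keep its status
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }
    run_test_sketch ubsan echo 'using ubsan'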
00:01:49.729 09:33:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:49.729 09:33:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:49.729 09:33:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:49.729 09:33:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:49.729 09:33:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:49.729 09:33:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:49.729 09:33:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:49.729 09:33:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:49.729 09:33:37 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:49.729 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:49.729 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:50.300 Using 'verbs' RDMA provider
00:02:03.481 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:13.475 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:13.736 Creating mk/config.mk...done.
00:02:13.736 Creating mk/cc.flags.mk...done.
00:02:13.736 Type 'make' to build.
00:02:13.736 09:34:01 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:13.736 09:34:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:13.736 09:34:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:13.736 09:34:01 -- common/autotest_common.sh@10 -- $ set +x
00:02:13.736 ************************************
00:02:13.736 START TEST make
00:02:13.736 ************************************
00:02:13.996 09:34:01 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:13.996 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:13.996 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:13.996 meson setup builddir \
00:02:13.996 -Dwith-libaio=enabled \
00:02:13.996 -Dwith-liburing=enabled \
00:02:13.996 -Dwith-libvfn=disabled \
00:02:13.996 -Dwith-spdk=disabled \
00:02:13.996 -Dexamples=false \
00:02:13.996 -Dtests=false \
00:02:13.996 -Dtools=false && \
00:02:13.996 meson compile -C builddir && \
00:02:13.996 cd -)
00:02:13.996 make[1]: Nothing to be done for 'all'.
00:02:15.897 The Meson build system
00:02:15.897 Version: 1.5.0
00:02:15.897 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:15.897 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:15.897 Build type: native build
00:02:15.897 Project name: xnvme
00:02:15.897 Project version: 0.7.5
00:02:15.897 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:15.897 C linker for the host machine: cc ld.bfd 2.40-14
00:02:15.897 Host machine cpu family: x86_64
00:02:15.897 Host machine cpu: x86_64
00:02:15.897 Message: host_machine.system: linux
00:02:15.897 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:15.897 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:15.897 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:15.897 Run-time dependency threads found: YES
00:02:15.897 Has header "setupapi.h" : NO
00:02:15.897 Has header "linux/blkzoned.h" : YES
00:02:15.897 Has header "linux/blkzoned.h" : YES (cached)
00:02:15.897 Has header "libaio.h" : YES
00:02:15.897 Library aio found: YES
00:02:15.897 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:15.897 Run-time dependency liburing found: YES 2.2
00:02:15.897 Dependency libvfn skipped: feature with-libvfn disabled
00:02:15.897 Found CMake: /usr/bin/cmake (3.27.7)
00:02:15.897 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:15.897 Subproject spdk : skipped: feature with-spdk disabled
00:02:15.897 Run-time dependency appleframeworks found: NO (tried framework)
00:02:15.897 Run-time dependency appleframeworks found: NO (tried framework)
00:02:15.897 Library rt found: YES
00:02:15.897 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:15.897 Configuring xnvme_config.h using configuration
00:02:15.897 Configuring xnvme.spec using configuration
00:02:15.897 Run-time dependency bash-completion found: YES 2.11
00:02:15.897 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:15.897 Program cp found: YES (/usr/bin/cp)
00:02:15.897 Build targets in project: 3
00:02:15.897
00:02:15.897 xnvme 0.7.5
00:02:15.897
00:02:15.897 Subprojects
00:02:15.897 spdk : NO Feature 'with-spdk' disabled
00:02:15.897
00:02:15.897 User defined options
00:02:15.897 examples : false
00:02:15.897 tests : false
00:02:15.897 tools : false
00:02:15.897 with-libaio : enabled
00:02:15.897 with-liburing: enabled
00:02:15.897 with-libvfn : disabled
00:02:15.897 with-spdk : disabled
00:02:15.897
00:02:15.897 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:16.468 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:16.468 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:16.468 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:16.468 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:16.468 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:16.468 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:16.468 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:16.468 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:16.468 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:16.468 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:16.468 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:16.468 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:16.468 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:16.729 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:16.729 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:16.729 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:16.729 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:16.729 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:16.729 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:16.729 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:16.729 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:16.729 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:16.729 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:16.729 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:16.729 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:16.729 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:16.729 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:16.729 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:16.729 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:16.729 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:16.729 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:16.729 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:16.729 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:16.729 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:16.729 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:16.729 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:16.729 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:16.729 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:16.729 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:16.729 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:16.729 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:16.729 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:16.729 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:16.729 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:16.729 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:16.729 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:16.729 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:16.729 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:16.729 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:16.729 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:16.729 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:16.729 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:16.990 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:16.990 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:16.990 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:16.990 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:16.990 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:16.990 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:16.990 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:16.990 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:16.990 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:16.990 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:16.990 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:16.990 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:16.990 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:16.990 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:16.990 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:16.990 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:16.990 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:16.990 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:16.990 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:16.990 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:17.252 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:17.252 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:17.514 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:17.514 [75/76] Linking static target lib/libxnvme.a
00:02:17.514 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:17.514 INFO: autodetecting backend as ninja
00:02:17.514 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:17.514 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:24.098 The Meson build system
00:02:24.098 Version: 1.5.0
00:02:24.098 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:24.098 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:24.098 Build type: native build
00:02:24.098 Program cat found: YES (/usr/bin/cat)
00:02:24.098 Project name: DPDK
00:02:24.098 Project version: 24.03.0
00:02:24.098 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:24.098 C linker for the host machine: cc ld.bfd 2.40-14
00:02:24.098 Host machine cpu family: x86_64
00:02:24.098 Host machine cpu: x86_64
00:02:24.098 Message: ## Building in Developer Mode ##
00:02:24.098 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:24.098 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:24.098 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:24.098 Program python3 found: YES (/usr/bin/python3)
00:02:24.098 Program cat found: YES (/usr/bin/cat)
00:02:24.098 Compiler for C supports arguments -march=native: YES
00:02:24.098 Checking for size of "void *" : 8
00:02:24.098 Checking for size of "void *" : 8 (cached)
00:02:24.098 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:24.098 Library m found: YES
00:02:24.098 Library numa found: YES
00:02:24.098 Has header "numaif.h" : YES
00:02:24.098 Library fdt found: NO
00:02:24.098 Library execinfo found: NO
00:02:24.098 Has header "execinfo.h" : YES
00:02:24.098 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:24.098 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:24.098 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:24.098 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:24.098 Run-time dependency openssl found: YES 3.1.1
00:02:24.098 Run-time dependency libpcap found: YES 1.10.4
00:02:24.098 Has header "pcap.h" with dependency libpcap: YES
00:02:24.098 Compiler for C supports arguments -Wcast-qual: YES
00:02:24.098 Compiler for C supports arguments -Wdeprecated: YES
00:02:24.098 Compiler for C supports arguments -Wformat: YES
00:02:24.098 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:24.098 Compiler for C supports arguments -Wformat-security: NO
00:02:24.098 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:24.098 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:24.098 Compiler for C supports arguments -Wnested-externs: YES
00:02:24.098 Compiler for C supports arguments -Wold-style-definition: YES
00:02:24.098 Compiler for C supports arguments -Wpointer-arith: YES
00:02:24.098 Compiler for C supports arguments -Wsign-compare: YES
00:02:24.098 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:24.098 Compiler for C supports arguments -Wundef: YES
00:02:24.098 Compiler for C supports arguments -Wwrite-strings: YES
00:02:24.098 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:24.099 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:24.099 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:24.099 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:24.099 Program objdump found: YES (/usr/bin/objdump)
00:02:24.099 Compiler for C supports arguments -mavx512f: YES
00:02:24.099 Checking if "AVX512 checking" compiles: YES
00:02:24.099 Fetching value of define "__SSE4_2__" : 1
00:02:24.099 Fetching value of define "__AES__" : 1
00:02:24.099 Fetching value of define "__AVX__" : 1
00:02:24.099 Fetching value of define "__AVX2__" : 1
00:02:24.099 Fetching value of define "__AVX512BW__" : 1
00:02:24.099 Fetching value of define "__AVX512CD__" : 1
00:02:24.099 Fetching value of define "__AVX512DQ__" : 1
00:02:24.099 Fetching value of define "__AVX512F__" : 1
00:02:24.099 Fetching value of define "__AVX512VL__" : 1
00:02:24.099 Fetching value of define "__PCLMUL__" : 1
00:02:24.099 Fetching value of define "__RDRND__" : 1
00:02:24.099 Fetching value of define "__RDSEED__" : 1
00:02:24.099 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:24.099 Fetching value of define "__znver1__" : (undefined)
00:02:24.099 Fetching value of define "__znver2__" : (undefined)
00:02:24.099 Fetching value of define "__znver3__" : (undefined)
00:02:24.099 Fetching value of define "__znver4__" : (undefined)
00:02:24.099 Library asan found: YES
00:02:24.099 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:24.099 Message: lib/log: Defining dependency "log"
00:02:24.099 Message: lib/kvargs: Defining dependency "kvargs"
00:02:24.099 Message: lib/telemetry: Defining dependency "telemetry"
00:02:24.099 Library rt found: YES
00:02:24.099 Checking for function "getentropy" : NO
00:02:24.099 Message: lib/eal: Defining dependency "eal"
00:02:24.099 Message: lib/ring: Defining dependency "ring"
00:02:24.099 Message: lib/rcu: Defining dependency "rcu"
00:02:24.099 Message: lib/mempool: Defining dependency "mempool"
00:02:24.099 Message: lib/mbuf: Defining dependency "mbuf"
00:02:24.099 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:24.099 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:24.099 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:24.099 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:24.099 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:24.099 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:24.099 Compiler for C supports arguments -mpclmul: YES
00:02:24.099 Compiler for C supports arguments -maes: YES
00:02:24.099 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:24.099 Compiler for C supports arguments -mavx512bw: YES
00:02:24.099 Compiler for C supports arguments -mavx512dq: YES
00:02:24.099 Compiler for C supports arguments -mavx512vl: YES
00:02:24.099 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:24.099 Compiler for C supports arguments -mavx2: YES
00:02:24.099 Compiler for C supports arguments -mavx: YES
00:02:24.099 Message: lib/net: Defining dependency "net"
00:02:24.099 Message: lib/meter: Defining dependency "meter"
00:02:24.099 Message: lib/ethdev: Defining dependency "ethdev"
00:02:24.099 Message: lib/pci: Defining dependency "pci"
00:02:24.099 Message: lib/cmdline: Defining dependency "cmdline"
00:02:24.099 Message: lib/hash: Defining dependency "hash"
00:02:24.099 Message: lib/timer: Defining dependency "timer"
00:02:24.099 Message: lib/compressdev: Defining dependency "compressdev"
00:02:24.099 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:24.099 Message: lib/dmadev: Defining dependency "dmadev"
00:02:24.099 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:24.099 Message: lib/power: Defining dependency "power"
00:02:24.099 Message: lib/reorder: Defining dependency "reorder"
00:02:24.099 Message: lib/security: Defining dependency "security"
00:02:24.099 Has header "linux/userfaultfd.h" : YES
00:02:24.099 Has header "linux/vduse.h" : YES
00:02:24.099 Message: lib/vhost: Defining dependency "vhost"
00:02:24.099 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:24.099 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:24.099 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:24.099 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:24.099 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:24.099 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:24.099 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:24.099 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:24.099 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:24.099 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:24.099 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:24.099 Configuring doxy-api-html.conf using configuration
00:02:24.099 Configuring doxy-api-man.conf using configuration
00:02:24.099 Program mandb found: YES (/usr/bin/mandb)
00:02:24.099 Program sphinx-build found: NO
00:02:24.099 Configuring rte_build_config.h using configuration
00:02:24.099 Message:
00:02:24.099 =================
00:02:24.099 Applications Enabled
00:02:24.099 =================
00:02:24.099
00:02:24.099 apps:
00:02:24.099
00:02:24.099
00:02:24.099 Message:
00:02:24.099 =================
00:02:24.099 Libraries Enabled
00:02:24.099 =================
00:02:24.099
00:02:24.099 libs:
00:02:24.099 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:24.099 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:24.099 cryptodev, dmadev, power, reorder, security, vhost,
00:02:24.099
00:02:24.099 Message:
00:02:24.099 ===============
00:02:24.099 Drivers Enabled
00:02:24.099 ===============
00:02:24.099
00:02:24.099 common:
00:02:24.099
00:02:24.099 bus:
00:02:24.099 pci, vdev,
00:02:24.099 mempool:
00:02:24.099 ring,
00:02:24.099 dma:
00:02:24.099
00:02:24.099 net:
00:02:24.099
00:02:24.099 crypto:
00:02:24.099
00:02:24.099 compress:
00:02:24.099
00:02:24.099 vdpa:
00:02:24.099
00:02:24.099
00:02:24.099 Message:
00:02:24.099 =================
00:02:24.099 Content Skipped
00:02:24.099 =================
00:02:24.099
00:02:24.099 apps:
00:02:24.099 dumpcap: explicitly disabled via build config
00:02:24.099 graph: explicitly disabled via build config
00:02:24.099 pdump: explicitly disabled via build config
00:02:24.099 proc-info: explicitly disabled via build config
00:02:24.099 test-acl: explicitly disabled via build config
00:02:24.099 test-bbdev: explicitly disabled via build config
00:02:24.100 test-cmdline: explicitly disabled via build config
00:02:24.100 test-compress-perf: explicitly disabled via build config
00:02:24.100 test-crypto-perf: explicitly disabled via build config
00:02:24.100 test-dma-perf: explicitly disabled via build config
00:02:24.100 test-eventdev: explicitly disabled via build config
00:02:24.100 test-fib: explicitly disabled via build config
00:02:24.100 test-flow-perf: explicitly disabled via build config
00:02:24.100 test-gpudev: explicitly disabled via build config
00:02:24.100 test-mldev: explicitly disabled via build config
00:02:24.100 test-pipeline: explicitly disabled via build config
00:02:24.100 test-pmd: explicitly disabled via build config
00:02:24.100 test-regex: explicitly disabled via build config
00:02:24.100 test-sad: explicitly disabled via build config
00:02:24.100 test-security-perf: explicitly disabled via build config
00:02:24.100
00:02:24.100 libs:
00:02:24.100 argparse: explicitly disabled via build config
00:02:24.100 metrics: explicitly disabled via build config
00:02:24.100 acl: explicitly disabled via build config
00:02:24.100 bbdev: explicitly disabled via build config
00:02:24.100 bitratestats: explicitly disabled via build config
00:02:24.100 bpf: explicitly disabled via build config
00:02:24.100 cfgfile: explicitly disabled via build config
00:02:24.100 distributor: explicitly disabled via build config
00:02:24.100 efd: explicitly disabled via build config
00:02:24.100 eventdev: explicitly disabled via build config
00:02:24.100 dispatcher: explicitly disabled via build config
00:02:24.100 gpudev: explicitly disabled via build config
00:02:24.100 gro: explicitly disabled via build config
00:02:24.100 gso: explicitly disabled via build config
00:02:24.100 ip_frag: explicitly disabled via build config
00:02:24.100 jobstats: explicitly disabled via build config
00:02:24.100 latencystats: explicitly disabled via build config
00:02:24.100 lpm: explicitly disabled via build config
00:02:24.100 member: explicitly disabled via build config
00:02:24.100 pcapng: explicitly disabled via build config
00:02:24.100 rawdev: explicitly disabled via build config
00:02:24.100 regexdev: explicitly disabled via build config
00:02:24.100 mldev: explicitly disabled via build config
00:02:24.100 rib: explicitly disabled via build config
00:02:24.100 sched: explicitly disabled via build config
00:02:24.100 stack: explicitly disabled via build config
00:02:24.100 ipsec: explicitly disabled via build config
00:02:24.100 pdcp: explicitly disabled via build config
00:02:24.100 fib: explicitly disabled via build config
00:02:24.100 port: explicitly disabled via build config
00:02:24.100 pdump: explicitly disabled via build config
00:02:24.100 table: explicitly disabled via build config
00:02:24.100 pipeline: explicitly disabled via build config
00:02:24.100 graph: explicitly disabled via build config
00:02:24.100 node: explicitly disabled via build config
00:02:24.100
00:02:24.100 drivers:
00:02:24.100 common/cpt: not in enabled drivers build config
00:02:24.100 common/dpaax: not in enabled drivers build config
00:02:24.100 common/iavf: not in enabled drivers build config
00:02:24.100 common/idpf: not in enabled drivers build config
00:02:24.100 common/ionic: not in enabled drivers build config
00:02:24.100 common/mvep: not in enabled drivers build config
00:02:24.100 common/octeontx: not in enabled drivers build config
00:02:24.100 bus/auxiliary: not in enabled drivers build config
00:02:24.100 bus/cdx: not in enabled drivers build config
00:02:24.100 bus/dpaa: not in enabled drivers build config
00:02:24.100 bus/fslmc: not in enabled drivers build config
00:02:24.100 bus/ifpga: not in enabled drivers build config
00:02:24.100 bus/platform: not in enabled drivers build config
00:02:24.100 bus/uacce: not in enabled drivers build config
00:02:24.100 bus/vmbus: not in enabled drivers build config
00:02:24.100 common/cnxk: not in enabled drivers build config
00:02:24.100 common/mlx5: not in enabled drivers build config
00:02:24.100 common/nfp: not in enabled drivers build config
00:02:24.100 common/nitrox: not in enabled drivers build config
00:02:24.100 common/qat: not in enabled drivers build config
00:02:24.100 common/sfc_efx: not in enabled drivers build config
00:02:24.100 mempool/bucket: not in enabled drivers build config
00:02:24.100 mempool/cnxk: not in enabled drivers build config
00:02:24.100 mempool/dpaa: not in enabled drivers build config
00:02:24.100 mempool/dpaa2: not in enabled drivers build config
00:02:24.100 mempool/octeontx: not in enabled drivers build config
00:02:24.100 mempool/stack: not in enabled drivers build config
00:02:24.100 dma/cnxk: not in enabled drivers build config
00:02:24.100 dma/dpaa: not in enabled drivers build config
00:02:24.100 dma/dpaa2: not in enabled drivers build config
00:02:24.100 dma/hisilicon: not in enabled drivers build config
00:02:24.100 dma/idxd: not in enabled drivers build config
00:02:24.100 dma/ioat: not in enabled drivers build config
00:02:24.100 dma/skeleton: not in enabled drivers build config
00:02:24.100 net/af_packet: not in enabled drivers build config
00:02:24.100 net/af_xdp: not in enabled drivers build config
00:02:24.100 net/ark: not in enabled drivers build config
00:02:24.100 net/atlantic: not in enabled drivers build config
00:02:24.100 net/avp: not in enabled drivers build config
00:02:24.100 net/axgbe: not in enabled drivers build config
00:02:24.100 net/bnx2x: not in enabled drivers build config
00:02:24.100 net/bnxt: not in enabled drivers build config
00:02:24.100 net/bonding: not in enabled drivers build config
00:02:24.100 net/cnxk: not in enabled drivers build config
00:02:24.100 net/cpfl: not in enabled drivers
build config 00:02:24.100 net/cxgbe: not in enabled drivers build config 00:02:24.100 net/dpaa: not in enabled drivers build config 00:02:24.100 net/dpaa2: not in enabled drivers build config 00:02:24.100 net/e1000: not in enabled drivers build config 00:02:24.100 net/ena: not in enabled drivers build config 00:02:24.100 net/enetc: not in enabled drivers build config 00:02:24.100 net/enetfec: not in enabled drivers build config 00:02:24.100 net/enic: not in enabled drivers build config 00:02:24.100 net/failsafe: not in enabled drivers build config 00:02:24.100 net/fm10k: not in enabled drivers build config 00:02:24.100 net/gve: not in enabled drivers build config 00:02:24.100 net/hinic: not in enabled drivers build config 00:02:24.100 net/hns3: not in enabled drivers build config 00:02:24.100 net/i40e: not in enabled drivers build config 00:02:24.100 net/iavf: not in enabled drivers build config 00:02:24.100 net/ice: not in enabled drivers build config 00:02:24.100 net/idpf: not in enabled drivers build config 00:02:24.100 net/igc: not in enabled drivers build config 00:02:24.100 net/ionic: not in enabled drivers build config 00:02:24.100 net/ipn3ke: not in enabled drivers build config 00:02:24.100 net/ixgbe: not in enabled drivers build config 00:02:24.100 net/mana: not in enabled drivers build config 00:02:24.100 net/memif: not in enabled drivers build config 00:02:24.100 net/mlx4: not in enabled drivers build config 00:02:24.100 net/mlx5: not in enabled drivers build config 00:02:24.100 net/mvneta: not in enabled drivers build config 00:02:24.100 net/mvpp2: not in enabled drivers build config 00:02:24.100 net/netvsc: not in enabled drivers build config 00:02:24.100 net/nfb: not in enabled drivers build config 00:02:24.100 net/nfp: not in enabled drivers build config 00:02:24.100 net/ngbe: not in enabled drivers build config 00:02:24.100 net/null: not in enabled drivers build config 00:02:24.100 net/octeontx: not in enabled drivers build config 00:02:24.100 net/octeon_ep: not in enabled drivers build config 00:02:24.100 net/pcap: not in enabled drivers build config 00:02:24.100 net/pfe: not in enabled drivers build config 00:02:24.100 net/qede: not in enabled drivers build config 00:02:24.100 net/ring: not in enabled drivers build config 00:02:24.100 net/sfc: not in enabled drivers build config 00:02:24.100 net/softnic: not in enabled drivers build config 00:02:24.100 net/tap: not in enabled drivers build config 00:02:24.100 net/thunderx: not in enabled drivers build config 00:02:24.100 net/txgbe: not in enabled drivers build config 00:02:24.100 net/vdev_netvsc: not in enabled drivers build config 00:02:24.100 net/vhost: not in enabled drivers build config 00:02:24.100 net/virtio: not in enabled drivers build config 00:02:24.100 net/vmxnet3: not in enabled drivers build config 00:02:24.100 raw/*: missing internal dependency, "rawdev" 00:02:24.100 crypto/armv8: not in enabled drivers build config 00:02:24.100 crypto/bcmfs: not in enabled drivers build config 00:02:24.100 crypto/caam_jr: not in enabled drivers build config 00:02:24.100 crypto/ccp: not in enabled drivers build config 00:02:24.100 crypto/cnxk: not in enabled drivers build config 00:02:24.100 crypto/dpaa_sec: not in enabled drivers build config 00:02:24.100 crypto/dpaa2_sec: not in enabled drivers build config 00:02:24.100 crypto/ipsec_mb: not in enabled drivers build config 00:02:24.100 crypto/mlx5: not in enabled drivers build config 00:02:24.100 crypto/mvsam: not in enabled drivers build config 00:02:24.100 crypto/nitrox: 
not in enabled drivers build config 00:02:24.100 crypto/null: not in enabled drivers build config 00:02:24.100 crypto/octeontx: not in enabled drivers build config 00:02:24.100 crypto/openssl: not in enabled drivers build config 00:02:24.100 crypto/scheduler: not in enabled drivers build config 00:02:24.100 crypto/uadk: not in enabled drivers build config 00:02:24.100 crypto/virtio: not in enabled drivers build config 00:02:24.100 compress/isal: not in enabled drivers build config 00:02:24.100 compress/mlx5: not in enabled drivers build config 00:02:24.100 compress/nitrox: not in enabled drivers build config 00:02:24.100 compress/octeontx: not in enabled drivers build config 00:02:24.100 compress/zlib: not in enabled drivers build config 00:02:24.100 regex/*: missing internal dependency, "regexdev" 00:02:24.100 ml/*: missing internal dependency, "mldev" 00:02:24.100 vdpa/ifc: not in enabled drivers build config 00:02:24.100 vdpa/mlx5: not in enabled drivers build config 00:02:24.100 vdpa/nfp: not in enabled drivers build config 00:02:24.100 vdpa/sfc: not in enabled drivers build config 00:02:24.101 event/*: missing internal dependency, "eventdev" 00:02:24.101 baseband/*: missing internal dependency, "bbdev" 00:02:24.101 gpu/*: missing internal dependency, "gpudev" 00:02:24.101 00:02:24.101 00:02:24.101 Build targets in project: 84 00:02:24.101 00:02:24.101 DPDK 24.03.0 00:02:24.101 00:02:24.101 User defined options 00:02:24.101 buildtype : debug 00:02:24.101 default_library : shared 00:02:24.101 libdir : lib 00:02:24.101 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:24.101 b_sanitize : address 00:02:24.101 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:24.101 c_link_args : 00:02:24.101 cpu_instruction_set: native 00:02:24.101 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:24.101 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:24.101 enable_docs : false 00:02:24.101 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:24.101 enable_kmods : false 00:02:24.101 max_lcores : 128 00:02:24.101 tests : false 00:02:24.101 00:02:24.101 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:24.359 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:24.619 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:24.619 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:24.619 [3/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:24.619 [4/267] Linking static target lib/librte_kvargs.a 00:02:24.619 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:24.619 [6/267] Linking static target lib/librte_log.a 00:02:24.878 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:24.878 [8/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:24.878 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:24.878 [10/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:24.878 [11/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.878 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:24.878 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:24.878 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:24.878 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:24.878 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:24.878 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:25.137 [18/267] Linking static target lib/librte_telemetry.a 00:02:25.137 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:25.396 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:25.396 [21/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.396 [22/267] Linking target lib/librte_log.so.24.1 00:02:25.396 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:25.396 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:25.396 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:25.396 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:25.396 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:25.396 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:25.396 [29/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:25.653 [30/267] Linking target lib/librte_kvargs.so.24.1 00:02:25.653 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:25.653 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:25.653 [33/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:25.653 [34/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.653 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:25.653 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:25.653 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:25.653 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:25.653 [39/267] Linking target lib/librte_telemetry.so.24.1 00:02:25.911 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:25.911 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:25.911 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:25.911 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:25.911 [44/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:25.911 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:25.911 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:26.169 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:26.169 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 
00:02:26.169 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:26.169 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:26.169 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:26.169 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:26.169 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:26.428 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:26.428 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:26.428 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:26.428 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:26.428 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:26.428 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:26.686 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:26.686 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:26.686 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:26.686 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:26.686 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:26.686 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:26.686 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:26.686 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:26.686 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:26.944 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:26.944 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:26.944 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:26.944 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:26.944 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:26.944 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:26.944 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:26.944 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:26.944 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:27.203 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:27.203 [79/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:27.203 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:27.203 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:27.462 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:27.462 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:27.462 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:27.462 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:27.462 [86/267] Linking static target lib/librte_eal.a
00:02:27.462 [87/267] Linking static target lib/librte_ring.a
00:02:27.462 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:27.720 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:27.720 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:27.720 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:27.720 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:27.720 [93/267] Linking static target lib/librte_mempool.a
00:02:27.720 [94/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:27.720 [95/267] Linking static target lib/librte_rcu.a
00:02:27.720 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:27.978 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.978 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:27.978 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:28.236 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:28.236 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:28.236 [102/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.236 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:28.236 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:28.236 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:28.236 [106/267] Linking static target lib/librte_net.a
00:02:28.236 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:28.236 [108/267] Linking static target lib/librte_meter.a
00:02:28.494 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:28.494 [110/267] Linking static target lib/librte_mbuf.a
00:02:28.494 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:28.494 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:28.494 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:28.494 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.752 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.752 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:28.752 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.011 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:29.011 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:29.269 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:29.269 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:29.269 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.269 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:29.269 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:29.269 [125/267] Linking static target lib/librte_pci.a
00:02:29.269 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:29.269 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:29.269 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:29.528 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:29.528 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:29.528 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:29.528 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:29.528 [133/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.528 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:29.528 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:29.528 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:29.528 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:29.528 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:29.787 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:29.787 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:29.787 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:29.787 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:29.787 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:29.787 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:29.787 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:29.787 [146/267] Linking static target lib/librte_cmdline.a
00:02:30.046 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:30.046 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:30.046 [149/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:30.046 [150/267] Linking static target lib/librte_timer.a
00:02:30.046 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:30.046 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:30.304 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:30.304 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:30.304 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:30.304 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:30.304 [157/267] Linking static target lib/librte_ethdev.a
00:02:30.304 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:30.563 [159/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:30.563 [160/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:30.563 [161/267] Linking static target lib/librte_hash.a
00:02:30.563 [162/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.563 [163/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:30.563 [164/267] Linking static target lib/librte_compressdev.a
00:02:30.563 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:30.563 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:30.563 [167/267] Linking static target lib/librte_dmadev.a
00:02:30.821 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:30.821 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:30.821 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:30.821 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:31.080 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.080 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:31.080 [174/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:31.339 [175/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:31.339 [176/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.339 [177/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:31.339 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:31.339 [179/267] Linking static target lib/librte_cryptodev.a
00:02:31.339 [180/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.339 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:31.339 [182/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.339 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:31.599 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:31.599 [185/267] Linking static target lib/librte_power.a
00:02:31.599 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:31.599 [187/267] Linking static target lib/librte_reorder.a
00:02:31.599 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:31.599 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:31.926 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:31.926 [191/267] Linking static target lib/librte_security.a
00:02:31.926 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:31.926 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:31.926 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.189 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:32.189 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.447 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:32.447 [198/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.447 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:32.447 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:32.706 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:32.706 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:32.706 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:32.706 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:32.706 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:32.964 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:32.964 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:32.964 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:32.964 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:32.964 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.964 [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:32.964 [212/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:32.964 [213/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:32.964 [214/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:33.222 [215/267] Linking static target drivers/librte_bus_pci.a
00:02:33.222 [216/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:33.222 [217/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:33.222 [218/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:33.222 [219/267] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:33.222 [220/267] Linking static target drivers/librte_bus_vdev.a
00:02:33.222 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:33.222 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:33.222 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:33.222 [224/267] Linking static target drivers/librte_mempool_ring.a
00:02:33.222 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.480 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.047 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:34.613 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.872 [229/267] Linking target lib/librte_eal.so.24.1
00:02:34.872 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:34.872 [231/267] Linking target lib/librte_ring.so.24.1
00:02:34.872 [232/267] Linking target lib/librte_pci.so.24.1
00:02:34.872 [233/267] Linking target lib/librte_timer.so.24.1
00:02:34.872 [234/267] Linking target drivers/librte_bus_vdev.so.24.1
00:02:34.872 [235/267] Linking target lib/librte_meter.so.24.1
00:02:34.872 [236/267] Linking target lib/librte_dmadev.so.24.1
00:02:35.130 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:35.130 [238/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:35.130 [239/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:35.130 [240/267] Linking target lib/librte_mempool.so.24.1
00:02:35.130 [241/267] Linking target lib/librte_rcu.so.24.1
00:02:35.130 [242/267] Linking target drivers/librte_bus_pci.so.24.1
00:02:35.130 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:35.130 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:35.130 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:35.130 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:35.130 [247/267] Linking target lib/librte_mbuf.so.24.1
00:02:35.130 [248/267] Linking target drivers/librte_mempool_ring.so.24.1
00:02:35.388 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:35.388 [250/267] Linking target lib/librte_reorder.so.24.1
00:02:35.388 [251/267] Linking target lib/librte_net.so.24.1
00:02:35.388 [252/267] Linking target lib/librte_compressdev.so.24.1
00:02:35.388 [253/267] Linking target lib/librte_cryptodev.so.24.1
00:02:35.388 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:35.388 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:35.388 [256/267] Linking target lib/librte_cmdline.so.24.1
00:02:35.388 [257/267] Linking target lib/librte_hash.so.24.1
00:02:35.388 [258/267] Linking target lib/librte_security.so.24.1
00:02:35.647 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:35.647 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.647 [261/267] Linking target lib/librte_ethdev.so.24.1
00:02:35.905 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:35.905 [263/267] Linking target lib/librte_power.so.24.1
00:02:36.471 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:36.730 [265/267] Linking static target lib/librte_vhost.a
00:02:37.663 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.663 [267/267] Linking target lib/librte_vhost.so.24.1
00:02:37.663 INFO: autodetecting backend as ninja
00:02:37.663 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:02:52.546 CC lib/ut_mock/mock.o
00:02:52.546 CC lib/ut/ut.o
00:02:52.546 CC lib/log/log.o
00:02:52.546 CC lib/log/log_flags.o
00:02:52.546 CC lib/log/log_deprecated.o
00:02:52.546 LIB libspdk_log.a
00:02:52.546 LIB libspdk_ut_mock.a
00:02:52.546 LIB libspdk_ut.a
00:02:52.546 SO libspdk_ut_mock.so.6.0
00:02:52.546 SO libspdk_log.so.7.1
00:02:52.546 SO libspdk_ut.so.2.0
00:02:52.546 SYMLINK libspdk_log.so
00:02:52.546 SYMLINK libspdk_ut_mock.so
00:02:52.546 SYMLINK libspdk_ut.so
00:02:52.546 CC lib/util/base64.o
00:02:52.546 CC lib/dma/dma.o
00:02:52.546 CC lib/util/bit_array.o
00:02:52.546 CC lib/util/cpuset.o
00:02:52.546 CC lib/util/crc16.o
00:02:52.546 CC lib/util/crc32.o
00:02:52.546 CC lib/util/crc32c.o
00:02:52.546 CXX lib/trace_parser/trace.o
00:02:52.546 CC lib/ioat/ioat.o
00:02:52.546 CC lib/vfio_user/host/vfio_user_pci.o
00:02:52.546 CC lib/util/crc32_ieee.o
00:02:52.546 CC lib/vfio_user/host/vfio_user.o
00:02:52.546 CC lib/util/crc64.o
00:02:52.546 CC lib/util/dif.o
00:02:52.546 CC lib/util/fd.o
00:02:52.546 LIB libspdk_dma.a
00:02:52.546 CC lib/util/fd_group.o
00:02:52.546 SO libspdk_dma.so.5.0
00:02:52.546 CC lib/util/file.o
00:02:52.546 LIB libspdk_ioat.a
00:02:52.546 CC lib/util/hexlify.o
00:02:52.546 SYMLINK libspdk_dma.so
00:02:52.546 SO libspdk_ioat.so.7.0
00:02:52.546 CC lib/util/iov.o
00:02:52.546 CC lib/util/math.o
00:02:52.546 CC lib/util/net.o
00:02:52.546 SYMLINK libspdk_ioat.so
00:02:52.546 CC lib/util/pipe.o
00:02:52.546 LIB libspdk_vfio_user.a
00:02:52.546 CC lib/util/strerror_tls.o
00:02:52.546 CC lib/util/string.o
00:02:52.546 SO libspdk_vfio_user.so.5.0
00:02:52.546 CC lib/util/uuid.o
00:02:52.546 SYMLINK libspdk_vfio_user.so
00:02:52.546 CC lib/util/xor.o
00:02:52.546 CC lib/util/zipf.o
00:02:52.546 CC lib/util/md5.o
00:02:52.808 LIB libspdk_util.a
00:02:52.808 SO libspdk_util.so.10.1
00:02:52.808 LIB libspdk_trace_parser.a
00:02:52.808 SO libspdk_trace_parser.so.6.0
00:02:53.069 SYMLINK libspdk_util.so
00:02:53.069 SYMLINK libspdk_trace_parser.so
00:02:53.069 CC lib/json/json_parse.o
00:02:53.069 CC lib/vmd/vmd.o
00:02:53.069 CC lib/vmd/led.o
00:02:53.069 CC lib/json/json_util.o
00:02:53.069 CC lib/json/json_write.o
00:02:53.069 CC lib/conf/conf.o
00:02:53.069 CC lib/env_dpdk/env.o
00:02:53.069 CC lib/env_dpdk/memory.o
00:02:53.069 CC lib/rdma_utils/rdma_utils.o
00:02:53.070 CC lib/idxd/idxd.o
00:02:53.070 CC lib/idxd/idxd_user.o
00:02:53.331 LIB libspdk_conf.a
00:02:53.331 CC lib/idxd/idxd_kernel.o
00:02:53.331 SO libspdk_conf.so.6.0
00:02:53.331 SYMLINK libspdk_conf.so
00:02:53.331 CC lib/env_dpdk/pci.o
00:02:53.331 CC lib/env_dpdk/init.o
00:02:53.331 LIB libspdk_rdma_utils.a
00:02:53.331 CC lib/env_dpdk/threads.o
00:02:53.331 SO libspdk_rdma_utils.so.1.0
00:02:53.331 LIB libspdk_json.a
00:02:53.331 CC lib/env_dpdk/pci_ioat.o
00:02:53.331 SO libspdk_json.so.6.0
00:02:53.331 SYMLINK libspdk_rdma_utils.so
00:02:53.331 SYMLINK libspdk_json.so
00:02:53.331 CC lib/env_dpdk/pci_virtio.o
00:02:53.331 CC lib/env_dpdk/pci_vmd.o
00:02:53.593 CC lib/env_dpdk/pci_idxd.o
00:02:53.593 CC lib/env_dpdk/pci_event.o
00:02:53.593 CC lib/rdma_provider/common.o
00:02:53.593 CC lib/env_dpdk/sigbus_handler.o
00:02:53.593 CC lib/env_dpdk/pci_dpdk.o
00:02:53.593 LIB libspdk_vmd.a
00:02:53.593 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:53.593 SO libspdk_vmd.so.6.0
00:02:53.593 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:53.593 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:53.593 SYMLINK libspdk_vmd.so
00:02:53.593 CC lib/jsonrpc/jsonrpc_server.o
00:02:53.593 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:53.593 CC lib/jsonrpc/jsonrpc_client.o
00:02:53.593 LIB libspdk_idxd.a
00:02:53.854 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:53.854 SO libspdk_idxd.so.12.1
00:02:53.854 SYMLINK libspdk_idxd.so
00:02:53.854 LIB libspdk_rdma_provider.a
00:02:53.854 SO libspdk_rdma_provider.so.7.0
00:02:53.854 SYMLINK libspdk_rdma_provider.so
00:02:53.854 LIB libspdk_jsonrpc.a
00:02:53.854 SO libspdk_jsonrpc.so.6.0
00:02:54.115 SYMLINK libspdk_jsonrpc.so
00:02:54.115 LIB libspdk_env_dpdk.a
00:02:54.377 CC lib/rpc/rpc.o
00:02:54.377 SO libspdk_env_dpdk.so.15.1
00:02:54.377 SYMLINK libspdk_env_dpdk.so
00:02:54.377 LIB libspdk_rpc.a
00:02:54.377 SO libspdk_rpc.so.6.0
00:02:54.637 SYMLINK libspdk_rpc.so
00:02:54.637 CC lib/notify/notify.o
00:02:54.637 CC lib/keyring/keyring.o
00:02:54.637 CC lib/notify/notify_rpc.o
00:02:54.637 CC lib/keyring/keyring_rpc.o
00:02:54.637 CC lib/trace/trace_flags.o
00:02:54.637 CC lib/trace/trace.o
00:02:54.637 CC lib/trace/trace_rpc.o
00:02:54.896 LIB libspdk_notify.a
00:02:54.896 LIB libspdk_keyring.a
00:02:54.896 SO libspdk_notify.so.6.0
00:02:54.896 SO libspdk_keyring.so.2.0
00:02:54.896 LIB libspdk_trace.a
00:02:54.896 SYMLINK libspdk_notify.so
00:02:54.896 SYMLINK libspdk_keyring.so
00:02:54.896 SO libspdk_trace.so.11.0
00:02:55.155 SYMLINK libspdk_trace.so
00:02:55.155 CC lib/sock/sock.o
00:02:55.155 CC lib/sock/sock_rpc.o
00:02:55.155 CC lib/thread/thread.o
00:02:55.414 CC lib/thread/iobuf.o
00:02:55.675 LIB libspdk_sock.a
00:02:55.675 SO libspdk_sock.so.10.0
00:02:55.675 SYMLINK libspdk_sock.so
00:02:55.935 CC lib/nvme/nvme_fabric.o
00:02:55.935 CC lib/nvme/nvme_ns_cmd.o
00:02:55.935 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:55.935 CC lib/nvme/nvme_ctrlr.o
00:02:55.935 CC lib/nvme/nvme_ns.o
00:02:55.935 CC lib/nvme/nvme_pcie_common.o
00:02:55.935 CC lib/nvme/nvme.o
00:02:55.935 CC lib/nvme/nvme_pcie.o
00:02:55.935 CC lib/nvme/nvme_qpair.o
00:02:56.507 CC lib/nvme/nvme_quirks.o
00:02:56.507 CC lib/nvme/nvme_transport.o
00:02:56.507 CC lib/nvme/nvme_discovery.o
00:02:56.507 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:56.507 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:56.767 CC lib/nvme/nvme_tcp.o
00:02:56.767 CC lib/nvme/nvme_opal.o
00:02:56.767 CC lib/nvme/nvme_io_msg.o
00:02:56.767 CC lib/nvme/nvme_poll_group.o
00:02:56.767 LIB libspdk_thread.a
00:02:56.767 SO libspdk_thread.so.11.0
00:02:57.025 SYMLINK libspdk_thread.so
00:02:57.025 CC lib/nvme/nvme_zns.o
00:02:57.025 CC lib/nvme/nvme_stubs.o
00:02:57.025 CC lib/nvme/nvme_auth.o
00:02:57.025 CC lib/accel/accel.o
00:02:57.285 CC lib/blob/blobstore.o
00:02:57.285 CC lib/blob/request.o
00:02:57.285 CC lib/nvme/nvme_cuse.o
00:02:57.285 CC lib/accel/accel_rpc.o
00:02:57.285 CC lib/accel/accel_sw.o
00:02:57.544 CC lib/nvme/nvme_rdma.o
00:02:57.544 CC lib/blob/zeroes.o
00:02:57.544 CC lib/blob/blob_bs_dev.o
00:02:57.801 CC lib/init/json_config.o
00:02:57.801 CC lib/init/subsystem.o
00:02:57.801 CC lib/virtio/virtio.o
00:02:57.801 CC lib/fsdev/fsdev.o
00:02:57.801 CC lib/fsdev/fsdev_io.o
00:02:57.801 CC lib/fsdev/fsdev_rpc.o
00:02:58.059 CC lib/init/subsystem_rpc.o
00:02:58.059 CC lib/init/rpc.o
00:02:58.059 CC lib/virtio/virtio_vhost_user.o
00:02:58.059 CC lib/virtio/virtio_vfio_user.o
00:02:58.059 CC lib/virtio/virtio_pci.o
00:02:58.059 LIB libspdk_init.a
00:02:58.059 SO libspdk_init.so.6.0
00:02:58.316 LIB libspdk_accel.a
00:02:58.316 SYMLINK libspdk_init.so
00:02:58.316 SO libspdk_accel.so.16.0
00:02:58.316 SYMLINK libspdk_accel.so
00:02:58.316 LIB libspdk_fsdev.a
00:02:58.316 LIB libspdk_virtio.a
00:02:58.316 CC lib/event/app.o
00:02:58.316 CC lib/event/log_rpc.o
00:02:58.316 CC lib/event/reactor.o
00:02:58.316 CC lib/event/scheduler_static.o
00:02:58.316 CC lib/event/app_rpc.o
00:02:58.316 SO libspdk_fsdev.so.2.0
00:02:58.316 SO libspdk_virtio.so.7.0
00:02:58.573 CC lib/bdev/bdev.o
00:02:58.573 SYMLINK libspdk_fsdev.so
00:02:58.573 CC lib/bdev/bdev_rpc.o
00:02:58.573 SYMLINK libspdk_virtio.so
00:02:58.573 CC lib/bdev/bdev_zone.o
00:02:58.573 CC lib/bdev/part.o
00:02:58.573 CC lib/bdev/scsi_nvme.o
00:02:58.573 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:02:58.832 LIB libspdk_event.a
00:02:58.832 SO libspdk_event.so.14.0
00:02:58.832 SYMLINK libspdk_event.so
00:02:58.832 LIB libspdk_nvme.a
00:02:59.092 SO libspdk_nvme.so.15.0
00:02:59.354 LIB libspdk_fuse_dispatcher.a
00:02:59.354 SYMLINK libspdk_nvme.so
00:02:59.354 SO libspdk_fuse_dispatcher.so.1.0
00:02:59.354 SYMLINK libspdk_fuse_dispatcher.so
00:03:00.337 LIB libspdk_blob.a
00:03:00.619 SO libspdk_blob.so.12.0
00:03:00.619 SYMLINK libspdk_blob.so
00:03:00.877 CC lib/blobfs/tree.o
00:03:00.877 CC lib/blobfs/blobfs.o
00:03:00.877 CC lib/lvol/lvol.o
00:03:00.877 LIB libspdk_bdev.a
00:03:00.877 SO libspdk_bdev.so.17.0
00:03:00.877 SYMLINK libspdk_bdev.so
00:03:01.134 CC lib/ftl/ftl_core.o
00:03:01.134 CC lib/ftl/ftl_init.o
00:03:01.134 CC lib/ftl/ftl_layout.o
00:03:01.134 CC lib/ftl/ftl_debug.o
00:03:01.134 CC lib/nvmf/ctrlr.o
00:03:01.134 CC lib/scsi/dev.o
00:03:01.134 CC lib/nbd/nbd.o
00:03:01.134 CC lib/ublk/ublk.o
00:03:01.392 CC lib/ftl/ftl_io.o
00:03:01.392 CC lib/nvmf/ctrlr_discovery.o
00:03:01.392 CC lib/scsi/lun.o
00:03:01.392 CC lib/scsi/port.o
00:03:01.392 CC lib/ublk/ublk_rpc.o
00:03:01.392 LIB libspdk_lvol.a
00:03:01.392 CC lib/nbd/nbd_rpc.o
00:03:01.392 LIB libspdk_blobfs.a
00:03:01.392 CC lib/nvmf/ctrlr_bdev.o
00:03:01.650 SO libspdk_lvol.so.11.0
00:03:01.650 CC lib/ftl/ftl_sb.o
00:03:01.650 SO libspdk_blobfs.so.11.0
00:03:01.650 SYMLINK libspdk_lvol.so
00:03:01.650 CC lib/ftl/ftl_l2p.o
00:03:01.650 CC lib/ftl/ftl_l2p_flat.o
00:03:01.650 SYMLINK libspdk_blobfs.so
00:03:01.650 CC lib/ftl/ftl_nv_cache.o
00:03:01.650 CC lib/scsi/scsi.o
00:03:01.650 LIB libspdk_ublk.a
00:03:01.650 LIB libspdk_nbd.a
00:03:01.650 SO libspdk_ublk.so.3.0
00:03:01.650 SO libspdk_nbd.so.7.0
00:03:01.650 SYMLINK libspdk_ublk.so
00:03:01.650 CC lib/ftl/ftl_band.o
00:03:01.650 CC lib/ftl/ftl_band_ops.o
00:03:01.650 SYMLINK libspdk_nbd.so
00:03:01.650 CC lib/ftl/ftl_writer.o
00:03:01.650 CC lib/scsi/scsi_bdev.o
00:03:01.650 CC lib/scsi/scsi_pr.o
00:03:01.650 CC lib/scsi/scsi_rpc.o
00:03:01.908 CC lib/scsi/task.o
00:03:01.908 CC lib/ftl/ftl_rq.o
00:03:01.908 CC lib/ftl/ftl_reloc.o
00:03:01.908 CC lib/ftl/ftl_l2p_cache.o
00:03:01.908 CC lib/ftl/ftl_p2l.o
00:03:02.167 CC lib/ftl/ftl_p2l_log.o
00:03:02.167 CC lib/nvmf/subsystem.o
00:03:02.167 CC lib/nvmf/nvmf.o
00:03:02.167 CC lib/ftl/mngt/ftl_mngt.o
00:03:02.167 CC lib/nvmf/nvmf_rpc.o
00:03:02.167 LIB libspdk_scsi.a
00:03:02.426 SO libspdk_scsi.so.9.0
00:03:02.426 CC lib/nvmf/transport.o
00:03:02.426 SYMLINK libspdk_scsi.so
00:03:02.426 CC lib/nvmf/tcp.o
00:03:02.426 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:02.426 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:02.426 CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:02.426 CC lib/ftl/mngt/ftl_mngt_md.o
00:03:02.426 CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:02.684 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:02.684 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:02.684 CC lib/ftl/mngt/ftl_mngt_band.o
00:03:02.684 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:02.684 CC lib/nvmf/stubs.o
00:03:02.684 CC lib/nvmf/mdns_server.o
00:03:02.684 CC lib/nvmf/rdma.o
00:03:02.942 CC lib/nvmf/auth.o
00:03:02.942 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:02.942 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:02.942 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:02.942 CC lib/ftl/utils/ftl_conf.o
00:03:02.942 CC lib/ftl/utils/ftl_md.o
00:03:03.201 CC lib/ftl/utils/ftl_mempool.o
00:03:03.201 CC lib/ftl/utils/ftl_bitmap.o
00:03:03.201 CC lib/ftl/utils/ftl_property.o
00:03:03.201 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:03.201 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:03.201 CC lib/iscsi/conn.o
00:03:03.201 CC lib/iscsi/init_grp.o
00:03:03.201 CC lib/iscsi/iscsi.o
00:03:03.201 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:03.458 CC lib/iscsi/param.o
00:03:03.458 CC lib/iscsi/portal_grp.o
00:03:03.458 CC lib/vhost/vhost.o
00:03:03.458 CC lib/vhost/vhost_rpc.o
00:03:03.458 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:03.458 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:03.716 CC lib/iscsi/tgt_node.o
00:03:03.716 CC lib/iscsi/iscsi_subsystem.o
00:03:03.716 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:03.716 CC lib/iscsi/iscsi_rpc.o
00:03:03.716 CC lib/iscsi/task.o
00:03:03.974 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:03.974 CC lib/vhost/vhost_scsi.o
00:03:03.974 CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:03.974 CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:03.974 CC lib/vhost/vhost_blk.o
00:03:03.974 CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:03.974 CC lib/vhost/rte_vhost_user.o
00:03:04.232 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:04.232 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:03:04.232 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:03:04.232 CC lib/ftl/base/ftl_base_dev.o
00:03:04.232 CC lib/ftl/base/ftl_base_bdev.o
00:03:04.232 CC lib/ftl/ftl_trace.o
00:03:04.490 LIB libspdk_ftl.a
00:03:04.747 SO libspdk_ftl.so.9.0
00:03:04.747 LIB libspdk_iscsi.a
00:03:04.747 SO libspdk_iscsi.so.8.0
00:03:04.747 SYMLINK libspdk_iscsi.so
00:03:05.005 LIB libspdk_vhost.a
00:03:05.005 SYMLINK libspdk_ftl.so
00:03:05.005 SO libspdk_vhost.so.8.0
00:03:05.005 LIB libspdk_nvmf.a
00:03:05.005 SYMLINK libspdk_vhost.so
00:03:05.005 SO libspdk_nvmf.so.20.0
00:03:05.262 SYMLINK libspdk_nvmf.so
00:03:05.519 CC module/env_dpdk/env_dpdk_rpc.o
00:03:05.519 CC module/sock/posix/posix.o
00:03:05.519 CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:05.519 CC module/keyring/linux/keyring.o
00:03:05.519 CC module/fsdev/aio/fsdev_aio.o
00:03:05.519 CC module/keyring/file/keyring.o
00:03:05.519 CC module/scheduler/gscheduler/gscheduler.o
00:03:05.519 CC module/blob/bdev/blob_bdev.o
00:03:05.519 CC module/accel/error/accel_error.o
00:03:05.519 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:05.519 LIB libspdk_env_dpdk_rpc.a
00:03:05.519 SO libspdk_env_dpdk_rpc.so.6.0
00:03:05.777 SYMLINK libspdk_env_dpdk_rpc.so
00:03:05.777 CC module/keyring/file/keyring_rpc.o
00:03:05.777 CC module/accel/error/accel_error_rpc.o
00:03:05.777 LIB libspdk_scheduler_gscheduler.a
00:03:05.777 CC module/keyring/linux/keyring_rpc.o
00:03:05.777 SO libspdk_scheduler_gscheduler.so.4.0
00:03:05.777 LIB libspdk_scheduler_dpdk_governor.a
00:03:05.777 LIB libspdk_scheduler_dynamic.a
00:03:05.777 SO libspdk_scheduler_dpdk_governor.so.4.0
00:03:05.777 SO libspdk_scheduler_dynamic.so.4.0
00:03:05.777 SYMLINK libspdk_scheduler_gscheduler.so
00:03:05.777 CC module/fsdev/aio/fsdev_aio_rpc.o
00:03:05.777 CC module/fsdev/aio/linux_aio_mgr.o
00:03:05.777 LIB libspdk_keyring_file.a
00:03:05.777 SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:05.777 LIB libspdk_blob_bdev.a
00:03:05.777 SO libspdk_keyring_file.so.2.0
00:03:05.777 SYMLINK libspdk_scheduler_dynamic.so
00:03:05.777 LIB libspdk_keyring_linux.a
00:03:05.777 SO libspdk_blob_bdev.so.12.0
00:03:05.777 LIB libspdk_accel_error.a
00:03:05.777 SO libspdk_keyring_linux.so.1.0
00:03:05.777 SO libspdk_accel_error.so.2.0
00:03:05.777 SYMLINK libspdk_keyring_file.so
00:03:05.777 SYMLINK libspdk_blob_bdev.so
00:03:05.777 SYMLINK libspdk_keyring_linux.so
00:03:05.777 SYMLINK libspdk_accel_error.so
00:03:06.036 CC module/accel/ioat/accel_ioat.o
00:03:06.036 CC module/accel/ioat/accel_ioat_rpc.o
00:03:06.036 CC module/accel/dsa/accel_dsa.o
00:03:06.036 CC module/accel/iaa/accel_iaa.o
00:03:06.036 CC module/accel/iaa/accel_iaa_rpc.o
00:03:06.036 CC module/bdev/error/vbdev_error.o
00:03:06.036 CC module/bdev/gpt/gpt.o
00:03:06.036 CC module/bdev/delay/vbdev_delay.o
00:03:06.036 LIB libspdk_accel_ioat.a
00:03:06.036 CC module/blobfs/bdev/blobfs_bdev.o
00:03:06.036 SO libspdk_accel_ioat.so.6.0
00:03:06.036 LIB libspdk_fsdev_aio.a
00:03:06.036 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:06.036 LIB libspdk_accel_iaa.a
00:03:06.036 SO libspdk_fsdev_aio.so.1.0
00:03:06.294 SYMLINK libspdk_accel_ioat.so
00:03:06.294 CC module/bdev/gpt/vbdev_gpt.o
00:03:06.294 SO libspdk_accel_iaa.so.3.0
00:03:06.294 CC module/bdev/delay/vbdev_delay_rpc.o
00:03:06.294 SYMLINK libspdk_fsdev_aio.so
00:03:06.294 SYMLINK libspdk_accel_iaa.so
00:03:06.294 CC module/accel/dsa/accel_dsa_rpc.o
00:03:06.294 CC module/bdev/error/vbdev_error_rpc.o
00:03:06.294 LIB libspdk_blobfs_bdev.a
00:03:06.294 SO libspdk_blobfs_bdev.so.6.0
00:03:06.294 LIB libspdk_sock_posix.a
00:03:06.294 SO libspdk_sock_posix.so.6.0
00:03:06.294 CC module/bdev/lvol/vbdev_lvol.o
00:03:06.294 LIB libspdk_accel_dsa.a
00:03:06.294 CC module/bdev/malloc/bdev_malloc.o
00:03:06.294 SYMLINK libspdk_blobfs_bdev.so
00:03:06.294 CC module/bdev/null/bdev_null.o
00:03:06.294 CC module/bdev/null/bdev_null_rpc.o
00:03:06.294 LIB libspdk_bdev_error.a
00:03:06.294 SO libspdk_accel_dsa.so.5.0
00:03:06.294 SO libspdk_bdev_error.so.6.0
00:03:06.294 LIB libspdk_bdev_gpt.a
00:03:06.294 LIB libspdk_bdev_delay.a
00:03:06.294 SYMLINK libspdk_sock_posix.so
00:03:06.294 SO libspdk_bdev_gpt.so.6.0
00:03:06.294 SO libspdk_bdev_delay.so.6.0
00:03:06.552 CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:06.552 SYMLINK libspdk_accel_dsa.so
00:03:06.552 SYMLINK libspdk_bdev_error.so
00:03:06.552 SYMLINK libspdk_bdev_delay.so
00:03:06.552 CC module/bdev/nvme/bdev_nvme.o
00:03:06.552 SYMLINK libspdk_bdev_gpt.so
00:03:06.552 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:06.552 CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:06.552 CC module/bdev/passthru/vbdev_passthru.o
00:03:06.552 CC module/bdev/raid/bdev_raid.o
00:03:06.552 LIB libspdk_bdev_null.a
00:03:06.552 CC module/bdev/split/vbdev_split.o
00:03:06.552 SO libspdk_bdev_null.so.6.0
00:03:06.552 CC module/bdev/zone_block/vbdev_zone_block.o
00:03:06.552 SYMLINK libspdk_bdev_null.so
00:03:06.552 CC module/bdev/split/vbdev_split_rpc.o
00:03:06.809 LIB libspdk_bdev_malloc.a
00:03:06.809 CC module/bdev/nvme/nvme_rpc.o
00:03:06.809 SO libspdk_bdev_malloc.so.6.0
00:03:06.809 CC module/bdev/raid/bdev_raid_rpc.o
00:03:06.809 LIB libspdk_bdev_split.a
00:03:06.809 SO libspdk_bdev_split.so.6.0
00:03:06.809 LIB libspdk_bdev_lvol.a
00:03:06.809 SYMLINK libspdk_bdev_malloc.so
00:03:06.809 CC module/bdev/raid/bdev_raid_sb.o
00:03:06.809 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:06.809 SO libspdk_bdev_lvol.so.6.0
00:03:06.809 SYMLINK libspdk_bdev_split.so
00:03:06.809 CC module/bdev/raid/raid0.o
00:03:06.809 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:06.809 SYMLINK libspdk_bdev_lvol.so
00:03:06.809 CC module/bdev/raid/raid1.o
00:03:06.810 LIB libspdk_bdev_passthru.a
00:03:06.810 CC module/bdev/raid/concat.o
00:03:06.810 CC module/bdev/nvme/bdev_mdns_client.o
00:03:07.067 SO libspdk_bdev_passthru.so.6.0
00:03:07.067 LIB libspdk_bdev_zone_block.a
00:03:07.067 SO libspdk_bdev_zone_block.so.6.0
00:03:07.067 SYMLINK libspdk_bdev_passthru.so
00:03:07.067 CC module/bdev/nvme/vbdev_opal.o
00:03:07.067 SYMLINK libspdk_bdev_zone_block.so
00:03:07.067 CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:07.067 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:07.067 CC module/bdev/xnvme/bdev_xnvme.o
00:03:07.067 CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:03:07.326 CC module/bdev/aio/bdev_aio.o
00:03:07.326 CC module/bdev/aio/bdev_aio_rpc.o
00:03:07.326 CC module/bdev/ftl/bdev_ftl.o
00:03:07.326 CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:07.326 CC module/bdev/iscsi/bdev_iscsi.o
00:03:07.326 CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:07.326 CC module/bdev/virtio/bdev_virtio_blk.o
00:03:07.326 CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:07.326 LIB libspdk_bdev_xnvme.a
00:03:07.326 SO libspdk_bdev_xnvme.so.3.0
00:03:07.326 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:07.586 LIB libspdk_bdev_ftl.a
00:03:07.586 SYMLINK libspdk_bdev_xnvme.so
00:03:07.586 SO libspdk_bdev_ftl.so.6.0
00:03:07.586 SYMLINK libspdk_bdev_ftl.so
00:03:07.586 LIB libspdk_bdev_aio.a
00:03:07.586 SO libspdk_bdev_aio.so.6.0
00:03:07.586 LIB libspdk_bdev_iscsi.a
00:03:07.586 SO libspdk_bdev_iscsi.so.6.0
00:03:07.586 LIB libspdk_bdev_raid.a
00:03:07.586 SYMLINK libspdk_bdev_aio.so
00:03:07.586 SYMLINK libspdk_bdev_iscsi.so
00:03:07.586 SO libspdk_bdev_raid.so.6.0
00:03:07.848 LIB libspdk_bdev_virtio.a
00:03:07.848 SO libspdk_bdev_virtio.so.6.0
00:03:07.848 SYMLINK libspdk_bdev_raid.so
00:03:07.848 SYMLINK libspdk_bdev_virtio.so
00:03:08.781 LIB libspdk_bdev_nvme.a
00:03:08.781 SO libspdk_bdev_nvme.so.7.1
00:03:08.781 SYMLINK libspdk_bdev_nvme.so
00:03:09.039 CC module/event/subsystems/keyring/keyring.o
00:03:09.039 CC module/event/subsystems/vmd/vmd.o
00:03:09.039 CC module/event/subsystems/vmd/vmd_rpc.o
00:03:09.039 CC module/event/subsystems/iobuf/iobuf.o
00:03:09.039 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:09.039 CC module/event/subsystems/fsdev/fsdev.o
00:03:09.039 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:09.039 CC module/event/subsystems/scheduler/scheduler.o
00:03:09.039 CC module/event/subsystems/sock/sock.o
00:03:09.296 LIB libspdk_event_vhost_blk.a
00:03:09.296 LIB libspdk_event_sock.a
00:03:09.296 LIB libspdk_event_scheduler.a
00:03:09.296 LIB libspdk_event_keyring.a
00:03:09.296 SO libspdk_event_vhost_blk.so.3.0
00:03:09.296 SO libspdk_event_scheduler.so.4.0
00:03:09.296 LIB libspdk_event_fsdev.a
00:03:09.296 SO libspdk_event_sock.so.5.0
00:03:09.296 LIB libspdk_event_vmd.a
00:03:09.296 LIB libspdk_event_iobuf.a
00:03:09.296 SO libspdk_event_keyring.so.1.0
00:03:09.296 SO libspdk_event_fsdev.so.1.0
00:03:09.296 SO libspdk_event_vmd.so.6.0
00:03:09.296 SO libspdk_event_iobuf.so.3.0
00:03:09.296 SYMLINK libspdk_event_vhost_blk.so
00:03:09.296 SYMLINK libspdk_event_scheduler.so
00:03:09.296 SYMLINK libspdk_event_sock.so
00:03:09.296 SYMLINK libspdk_event_keyring.so
00:03:09.296 SYMLINK libspdk_event_fsdev.so
00:03:09.296 SYMLINK libspdk_event_iobuf.so
00:03:09.296 SYMLINK libspdk_event_vmd.so
00:03:09.555 CC module/event/subsystems/accel/accel.o
00:03:09.555 LIB libspdk_event_accel.a
00:03:09.813 SO libspdk_event_accel.so.6.0
00:03:09.813 SYMLINK libspdk_event_accel.so
00:03:10.072 CC module/event/subsystems/bdev/bdev.o
00:03:10.072 LIB libspdk_event_bdev.a
00:03:10.072 SO libspdk_event_bdev.so.6.0
00:03:10.330 SYMLINK libspdk_event_bdev.so
00:03:10.330 CC module/event/subsystems/ublk/ublk.o
00:03:10.330 CC module/event/subsystems/nbd/nbd.o
00:03:10.330 CC module/event/subsystems/scsi/scsi.o
00:03:10.330 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:10.330 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:10.591 LIB libspdk_event_nbd.a
00:03:10.591 LIB libspdk_event_ublk.a
00:03:10.591 SO libspdk_event_nbd.so.6.0
00:03:10.591 LIB libspdk_event_scsi.a
00:03:10.591 SO libspdk_event_ublk.so.3.0
00:03:10.591 SO libspdk_event_scsi.so.6.0
00:03:10.591 SYMLINK libspdk_event_nbd.so
00:03:10.591 SYMLINK libspdk_event_ublk.so
00:03:10.591 LIB libspdk_event_nvmf.a
00:03:10.591 SYMLINK libspdk_event_scsi.so
00:03:10.591 SO libspdk_event_nvmf.so.6.0
00:03:10.591 SYMLINK libspdk_event_nvmf.so
00:03:10.861 CC module/event/subsystems/iscsi/iscsi.o
00:03:10.861 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:10.861 LIB libspdk_event_vhost_scsi.a
00:03:10.861 LIB libspdk_event_iscsi.a
00:03:10.861 SO libspdk_event_vhost_scsi.so.3.0
00:03:10.861 SO libspdk_event_iscsi.so.6.0
00:03:10.861 SYMLINK libspdk_event_vhost_scsi.so
00:03:10.861 SYMLINK libspdk_event_iscsi.so
00:03:11.119 SO libspdk.so.6.0
00:03:11.119 SYMLINK libspdk.so
00:03:11.379 TEST_HEADER include/spdk/accel.h
00:03:11.379 CC test/rpc_client/rpc_client_test.o
00:03:11.379 TEST_HEADER include/spdk/accel_module.h
00:03:11.379 TEST_HEADER include/spdk/assert.h
00:03:11.379 TEST_HEADER include/spdk/barrier.h
00:03:11.379 TEST_HEADER include/spdk/base64.h
00:03:11.379 TEST_HEADER include/spdk/bdev.h
00:03:11.379 CXX app/trace/trace.o
00:03:11.379 TEST_HEADER include/spdk/bdev_module.h
00:03:11.379 TEST_HEADER include/spdk/bdev_zone.h
00:03:11.379 TEST_HEADER include/spdk/bit_array.h
00:03:11.379 TEST_HEADER include/spdk/bit_pool.h
00:03:11.379 TEST_HEADER include/spdk/blob_bdev.h
00:03:11.379 TEST_HEADER include/spdk/blobfs_bdev.h
00:03:11.379 TEST_HEADER include/spdk/blobfs.h
00:03:11.379 TEST_HEADER include/spdk/blob.h
00:03:11.379 TEST_HEADER include/spdk/conf.h
00:03:11.379 TEST_HEADER include/spdk/config.h
00:03:11.379 TEST_HEADER include/spdk/cpuset.h
00:03:11.379 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:11.379 TEST_HEADER include/spdk/crc16.h
00:03:11.379 TEST_HEADER include/spdk/crc32.h
00:03:11.379 TEST_HEADER include/spdk/crc64.h
00:03:11.379 TEST_HEADER include/spdk/dif.h
00:03:11.379 TEST_HEADER include/spdk/dma.h
00:03:11.379 TEST_HEADER include/spdk/endian.h
00:03:11.379 TEST_HEADER include/spdk/env_dpdk.h
00:03:11.379 TEST_HEADER include/spdk/env.h
00:03:11.379 TEST_HEADER include/spdk/event.h
00:03:11.379 TEST_HEADER include/spdk/fd_group.h
00:03:11.379 TEST_HEADER include/spdk/fd.h
00:03:11.379 TEST_HEADER include/spdk/file.h
00:03:11.379 TEST_HEADER include/spdk/fsdev.h
00:03:11.379 TEST_HEADER include/spdk/fsdev_module.h
00:03:11.379 CC examples/util/zipf/zipf.o
00:03:11.379 TEST_HEADER include/spdk/ftl.h
00:03:11.379 TEST_HEADER include/spdk/fuse_dispatcher.h
00:03:11.379 TEST_HEADER include/spdk/gpt_spec.h
00:03:11.379 TEST_HEADER include/spdk/hexlify.h
00:03:11.379 TEST_HEADER include/spdk/histogram_data.h
00:03:11.379 TEST_HEADER include/spdk/idxd.h
00:03:11.379 TEST_HEADER include/spdk/idxd_spec.h
00:03:11.379 CC test/thread/poller_perf/poller_perf.o
00:03:11.379 TEST_HEADER include/spdk/init.h
00:03:11.379 TEST_HEADER include/spdk/ioat.h
00:03:11.379 CC examples/ioat/perf/perf.o
00:03:11.379 TEST_HEADER include/spdk/ioat_spec.h
00:03:11.379 TEST_HEADER include/spdk/iscsi_spec.h
00:03:11.379 TEST_HEADER include/spdk/json.h
00:03:11.379 TEST_HEADER include/spdk/jsonrpc.h
00:03:11.379 TEST_HEADER include/spdk/keyring.h
00:03:11.379 TEST_HEADER include/spdk/keyring_module.h
00:03:11.379 TEST_HEADER include/spdk/likely.h
00:03:11.379 TEST_HEADER include/spdk/log.h
00:03:11.379 TEST_HEADER include/spdk/lvol.h
00:03:11.379 TEST_HEADER include/spdk/md5.h
00:03:11.379 TEST_HEADER include/spdk/memory.h
00:03:11.379 TEST_HEADER include/spdk/mmio.h
00:03:11.379 TEST_HEADER include/spdk/nbd.h
00:03:11.379 TEST_HEADER include/spdk/net.h
00:03:11.379 TEST_HEADER include/spdk/notify.h
00:03:11.379 TEST_HEADER include/spdk/nvme.h
00:03:11.379 CC test/app/bdev_svc/bdev_svc.o
00:03:11.379 TEST_HEADER include/spdk/nvme_intel.h
00:03:11.379 TEST_HEADER include/spdk/nvme_ocssd.h
00:03:11.379 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:11.379 TEST_HEADER include/spdk/nvme_spec.h
00:03:11.379 CC test/dma/test_dma/test_dma.o
00:03:11.379 TEST_HEADER include/spdk/nvme_zns.h
00:03:11.379 TEST_HEADER include/spdk/nvmf_cmd.h
00:03:11.379 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:11.379 TEST_HEADER include/spdk/nvmf.h
00:03:11.379 TEST_HEADER include/spdk/nvmf_spec.h
00:03:11.379 TEST_HEADER include/spdk/nvmf_transport.h
00:03:11.379 CC test/env/mem_callbacks/mem_callbacks.o
00:03:11.379 TEST_HEADER include/spdk/opal.h
00:03:11.379 TEST_HEADER include/spdk/opal_spec.h
00:03:11.379 TEST_HEADER include/spdk/pci_ids.h
00:03:11.379 TEST_HEADER include/spdk/pipe.h
00:03:11.379 TEST_HEADER include/spdk/queue.h
00:03:11.379 TEST_HEADER include/spdk/reduce.h
00:03:11.379 TEST_HEADER include/spdk/rpc.h
00:03:11.379 TEST_HEADER include/spdk/scheduler.h
00:03:11.379 TEST_HEADER include/spdk/scsi.h
00:03:11.379 TEST_HEADER include/spdk/scsi_spec.h
00:03:11.379 TEST_HEADER include/spdk/sock.h
00:03:11.379 TEST_HEADER include/spdk/stdinc.h
00:03:11.379 TEST_HEADER include/spdk/string.h
00:03:11.379 TEST_HEADER include/spdk/thread.h
00:03:11.379 TEST_HEADER include/spdk/trace.h
00:03:11.379 TEST_HEADER include/spdk/trace_parser.h
00:03:11.379 TEST_HEADER include/spdk/tree.h
00:03:11.379 TEST_HEADER include/spdk/ublk.h
00:03:11.379 TEST_HEADER include/spdk/util.h
00:03:11.379 TEST_HEADER include/spdk/uuid.h
00:03:11.379 TEST_HEADER include/spdk/version.h
00:03:11.379 TEST_HEADER include/spdk/vfio_user_pci.h
00:03:11.379 TEST_HEADER include/spdk/vfio_user_spec.h
00:03:11.379 TEST_HEADER include/spdk/vhost.h
00:03:11.379 TEST_HEADER include/spdk/vmd.h
00:03:11.379 TEST_HEADER include/spdk/xor.h
00:03:11.379 TEST_HEADER include/spdk/zipf.h
00:03:11.379 CXX test/cpp_headers/accel.o
00:03:11.379 LINK zipf
00:03:11.379 LINK rpc_client_test
00:03:11.379 LINK interrupt_tgt
00:03:11.379 LINK poller_perf
00:03:11.638 LINK bdev_svc
00:03:11.638 CXX test/cpp_headers/accel_module.o
00:03:11.638 LINK ioat_perf
00:03:11.638 LINK spdk_trace
00:03:11.638 CC examples/ioat/verify/verify.o
00:03:11.638 CC app/trace_record/trace_record.o
00:03:11.638 CXX test/cpp_headers/assert.o
00:03:11.638 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:11.638 CC app/nvmf_tgt/nvmf_main.o
00:03:11.638 CC test/app/histogram_perf/histogram_perf.o
00:03:11.638 CC app/iscsi_tgt/iscsi_tgt.o
00:03:11.897 CC test/event/event_perf/event_perf.o
00:03:11.897 CXX test/cpp_headers/barrier.o
00:03:11.897 LINK verify
00:03:11.897 LINK test_dma
00:03:11.897 LINK histogram_perf
00:03:11.897 LINK nvmf_tgt
00:03:11.897 LINK mem_callbacks
00:03:11.897 LINK spdk_trace_record
00:03:11.897 LINK event_perf
00:03:11.897 CXX test/cpp_headers/base64.o
00:03:11.897 LINK iscsi_tgt
00:03:11.897 LINK nvme_fuzz
00:03:12.156 CC test/app/jsoncat/jsoncat.o
00:03:12.156 CC test/env/vtophys/vtophys.o
00:03:12.156 CXX test/cpp_headers/bdev.o
00:03:12.156 CC test/app/stub/stub.o
00:03:12.156 CC examples/sock/hello_world/hello_sock.o
00:03:12.156 CC test/event/reactor/reactor.o
00:03:12.156 CC examples/thread/thread/thread_ex.o
00:03:12.156 CC examples/vmd/lsvmd/lsvmd.o
00:03:12.156 LINK jsoncat
00:03:12.156 LINK reactor
00:03:12.156 CXX test/cpp_headers/bdev_module.o
00:03:12.156 LINK vtophys
00:03:12.156 LINK stub
00:03:12.156 CC app/spdk_tgt/spdk_tgt.o
00:03:12.156 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:12.414 LINK lsvmd
00:03:12.414 CXX test/cpp_headers/bdev_zone.o
00:03:12.414 LINK hello_sock
00:03:12.414 LINK thread
00:03:12.414 CC test/event/reactor_perf/reactor_perf.o
00:03:12.414 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:12.414 CC examples/vmd/led/led.o
00:03:12.414 CC test/event/app_repeat/app_repeat.o
00:03:12.414 LINK spdk_tgt
00:03:12.414 CXX test/cpp_headers/bit_array.o
00:03:12.414 CC test/env/memory/memory_ut.o
00:03:12.414 CC test/env/pci/pci_ut.o
00:03:12.414 LINK reactor_perf
00:03:12.414 LINK led
00:03:12.673 LINK env_dpdk_post_init
00:03:12.673 LINK app_repeat
00:03:12.673 CXX test/cpp_headers/bit_pool.o
00:03:12.673 CC test/event/scheduler/scheduler.o
00:03:12.673 CXX test/cpp_headers/blob_bdev.o
00:03:12.673 CC app/spdk_lspci/spdk_lspci.o
00:03:12.673 CC app/spdk_nvme_perf/perf.o
00:03:12.673 CC app/spdk_nvme_identify/identify.o
00:03:12.673 CC app/spdk_nvme_discover/discovery_aer.o
00:03:12.673 CC examples/idxd/perf/perf.o
00:03:12.673 CXX test/cpp_headers/blobfs_bdev.o
00:03:12.673 LINK spdk_lspci
00:03:12.931 LINK pci_ut
00:03:12.931 LINK
scheduler 00:03:12.931 CXX test/cpp_headers/blobfs.o 00:03:12.931 LINK spdk_nvme_discover 00:03:12.931 CXX test/cpp_headers/blob.o 00:03:12.931 CXX test/cpp_headers/conf.o 00:03:12.931 LINK idxd_perf 00:03:13.190 CXX test/cpp_headers/config.o 00:03:13.190 CC examples/accel/perf/accel_perf.o 00:03:13.190 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:13.190 CXX test/cpp_headers/cpuset.o 00:03:13.190 CC app/spdk_top/spdk_top.o 00:03:13.190 CC examples/nvme/hello_world/hello_world.o 00:03:13.190 CC examples/blob/hello_world/hello_blob.o 00:03:13.190 CXX test/cpp_headers/crc16.o 00:03:13.449 LINK hello_fsdev 00:03:13.449 LINK hello_blob 00:03:13.449 CXX test/cpp_headers/crc32.o 00:03:13.449 LINK hello_world 00:03:13.449 CXX test/cpp_headers/crc64.o 00:03:13.449 LINK spdk_nvme_identify 00:03:13.449 LINK accel_perf 00:03:13.449 LINK memory_ut 00:03:13.709 CXX test/cpp_headers/dif.o 00:03:13.709 LINK spdk_nvme_perf 00:03:13.709 CC examples/nvme/reconnect/reconnect.o 00:03:13.709 CC app/vhost/vhost.o 00:03:13.709 CXX test/cpp_headers/dma.o 00:03:13.709 CC examples/blob/cli/blobcli.o 00:03:13.709 CXX test/cpp_headers/endian.o 00:03:13.709 CXX test/cpp_headers/env_dpdk.o 00:03:13.709 LINK vhost 00:03:13.970 CC app/spdk_dd/spdk_dd.o 00:03:13.970 CC examples/bdev/hello_world/hello_bdev.o 00:03:13.970 CXX test/cpp_headers/env.o 00:03:13.970 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:13.970 CC app/fio/nvme/fio_plugin.o 00:03:13.970 LINK iscsi_fuzz 00:03:13.970 LINK reconnect 00:03:13.970 CXX test/cpp_headers/event.o 00:03:13.970 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:13.970 LINK blobcli 00:03:13.970 CC app/fio/bdev/fio_plugin.o 00:03:13.970 LINK hello_bdev 00:03:14.228 CXX test/cpp_headers/fd_group.o 00:03:14.228 LINK spdk_top 00:03:14.228 LINK spdk_dd 00:03:14.228 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:14.228 CXX test/cpp_headers/fd.o 00:03:14.228 CC examples/bdev/bdevperf/bdevperf.o 00:03:14.228 CC test/accel/dif/dif.o 00:03:14.228 CC test/blobfs/mkfs/mkfs.o 00:03:14.487 CXX test/cpp_headers/file.o 00:03:14.487 LINK vhost_fuzz 00:03:14.487 CC test/nvme/aer/aer.o 00:03:14.487 CC test/lvol/esnap/esnap.o 00:03:14.487 LINK spdk_nvme 00:03:14.487 CXX test/cpp_headers/fsdev.o 00:03:14.487 LINK mkfs 00:03:14.487 LINK spdk_bdev 00:03:14.487 LINK nvme_manage 00:03:14.487 CC test/nvme/reset/reset.o 00:03:14.747 CC test/nvme/sgl/sgl.o 00:03:14.747 CXX test/cpp_headers/fsdev_module.o 00:03:14.747 CXX test/cpp_headers/ftl.o 00:03:14.747 LINK aer 00:03:14.747 CXX test/cpp_headers/fuse_dispatcher.o 00:03:14.747 CC examples/nvme/arbitration/arbitration.o 00:03:14.747 CXX test/cpp_headers/gpt_spec.o 00:03:14.747 CXX test/cpp_headers/hexlify.o 00:03:14.747 LINK reset 00:03:14.747 CC examples/nvme/hotplug/hotplug.o 00:03:14.747 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:15.006 LINK sgl 00:03:15.006 LINK dif 00:03:15.006 CXX test/cpp_headers/histogram_data.o 00:03:15.006 CC test/nvme/e2edp/nvme_dp.o 00:03:15.006 LINK arbitration 00:03:15.006 LINK cmb_copy 00:03:15.006 CC test/nvme/overhead/overhead.o 00:03:15.006 CXX test/cpp_headers/idxd.o 00:03:15.006 LINK hotplug 00:03:15.006 CC test/nvme/err_injection/err_injection.o 00:03:15.264 LINK bdevperf 00:03:15.264 CC test/nvme/startup/startup.o 00:03:15.264 CC test/nvme/reserve/reserve.o 00:03:15.264 CXX test/cpp_headers/idxd_spec.o 00:03:15.264 CC test/nvme/simple_copy/simple_copy.o 00:03:15.264 LINK nvme_dp 00:03:15.264 LINK err_injection 00:03:15.264 LINK startup 00:03:15.264 CC examples/nvme/abort/abort.o 00:03:15.264 LINK overhead 
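The CXX test/cpp_headers/*.o lines running through this build output come from compiling every public spdk/*.h header as its own translation unit, so a header that is not self-contained fails immediately. A minimal sketch of that check, assuming a generated one-include-per-file layout rather than SPDK's actual Makefile rules:

    # Hypothetical reconstruction: wrap each public header in its own C++ file
    # and compile it standalone; a missing nested include breaks exactly here.
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        printf '#include "spdk/%s.h"\n' "$name" > "test/cpp_headers/$name.cpp"
        c++ -I include -c "test/cpp_headers/$name.cpp" -o "test/cpp_headers/$name.o"
    done
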
00:03:15.264 CC test/nvme/connect_stress/connect_stress.o 00:03:15.264 CXX test/cpp_headers/init.o 00:03:15.265 CXX test/cpp_headers/ioat.o 00:03:15.265 CXX test/cpp_headers/ioat_spec.o 00:03:15.522 LINK reserve 00:03:15.522 LINK simple_copy 00:03:15.522 CXX test/cpp_headers/iscsi_spec.o 00:03:15.522 CC test/nvme/boot_partition/boot_partition.o 00:03:15.522 CXX test/cpp_headers/json.o 00:03:15.522 LINK connect_stress 00:03:15.522 CC test/nvme/compliance/nvme_compliance.o 00:03:15.523 CC test/nvme/fused_ordering/fused_ordering.o 00:03:15.523 CC test/bdev/bdevio/bdevio.o 00:03:15.523 LINK abort 00:03:15.523 LINK boot_partition 00:03:15.781 CXX test/cpp_headers/jsonrpc.o 00:03:15.781 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:15.781 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:15.781 CC test/nvme/fdp/fdp.o 00:03:15.781 CXX test/cpp_headers/keyring.o 00:03:15.781 LINK fused_ordering 00:03:15.781 CC test/nvme/cuse/cuse.o 00:03:15.781 CXX test/cpp_headers/keyring_module.o 00:03:15.781 LINK nvme_compliance 00:03:15.781 LINK doorbell_aers 00:03:15.781 CXX test/cpp_headers/likely.o 00:03:15.781 LINK pmr_persistence 00:03:15.781 CXX test/cpp_headers/log.o 00:03:15.781 CXX test/cpp_headers/lvol.o 00:03:16.039 CXX test/cpp_headers/md5.o 00:03:16.039 LINK bdevio 00:03:16.039 CXX test/cpp_headers/memory.o 00:03:16.039 CXX test/cpp_headers/mmio.o 00:03:16.039 CXX test/cpp_headers/nbd.o 00:03:16.039 CXX test/cpp_headers/net.o 00:03:16.039 CXX test/cpp_headers/notify.o 00:03:16.039 CXX test/cpp_headers/nvme.o 00:03:16.039 LINK fdp 00:03:16.039 CXX test/cpp_headers/nvme_intel.o 00:03:16.039 CXX test/cpp_headers/nvme_ocssd.o 00:03:16.039 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:16.039 CXX test/cpp_headers/nvme_spec.o 00:03:16.297 CXX test/cpp_headers/nvme_zns.o 00:03:16.297 CC examples/nvmf/nvmf/nvmf.o 00:03:16.297 CXX test/cpp_headers/nvmf_cmd.o 00:03:16.297 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:16.297 CXX test/cpp_headers/nvmf.o 00:03:16.297 CXX test/cpp_headers/nvmf_spec.o 00:03:16.297 CXX test/cpp_headers/nvmf_transport.o 00:03:16.297 CXX test/cpp_headers/opal.o 00:03:16.297 CXX test/cpp_headers/opal_spec.o 00:03:16.297 CXX test/cpp_headers/pci_ids.o 00:03:16.297 CXX test/cpp_headers/pipe.o 00:03:16.297 CXX test/cpp_headers/queue.o 00:03:16.297 CXX test/cpp_headers/reduce.o 00:03:16.297 CXX test/cpp_headers/rpc.o 00:03:16.297 CXX test/cpp_headers/scheduler.o 00:03:16.555 CXX test/cpp_headers/scsi.o 00:03:16.555 CXX test/cpp_headers/scsi_spec.o 00:03:16.555 CXX test/cpp_headers/sock.o 00:03:16.555 LINK nvmf 00:03:16.555 CXX test/cpp_headers/stdinc.o 00:03:16.555 CXX test/cpp_headers/string.o 00:03:16.555 CXX test/cpp_headers/thread.o 00:03:16.555 CXX test/cpp_headers/trace.o 00:03:16.555 CXX test/cpp_headers/trace_parser.o 00:03:16.555 CXX test/cpp_headers/tree.o 00:03:16.555 CXX test/cpp_headers/ublk.o 00:03:16.555 CXX test/cpp_headers/util.o 00:03:16.555 CXX test/cpp_headers/uuid.o 00:03:16.555 CXX test/cpp_headers/version.o 00:03:16.555 CXX test/cpp_headers/vfio_user_pci.o 00:03:16.555 CXX test/cpp_headers/vfio_user_spec.o 00:03:16.555 CXX test/cpp_headers/vhost.o 00:03:16.555 CXX test/cpp_headers/vmd.o 00:03:16.813 CXX test/cpp_headers/xor.o 00:03:16.813 CXX test/cpp_headers/zipf.o 00:03:16.813 LINK cuse 00:03:19.346 LINK esnap 00:03:19.608 00:03:19.608 real 1m5.926s 00:03:19.608 user 6m5.306s 00:03:19.608 sys 1m5.304s 00:03:19.608 09:35:07 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:19.608 ************************************ 00:03:19.608 09:35:07 make 
-- common/autotest_common.sh@10 -- $ set +x 00:03:19.608 END TEST make 00:03:19.608 ************************************ 00:03:19.608 09:35:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:19.608 09:35:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:19.608 09:35:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:19.608 09:35:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.608 09:35:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:19.608 09:35:07 -- pm/common@44 -- $ pid=5072 00:03:19.608 09:35:07 -- pm/common@50 -- $ kill -TERM 5072 00:03:19.608 09:35:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.608 09:35:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:19.608 09:35:07 -- pm/common@44 -- $ pid=5073 00:03:19.608 09:35:07 -- pm/common@50 -- $ kill -TERM 5073 00:03:19.608 09:35:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:19.608 09:35:07 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:19.608 09:35:07 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:19.608 09:35:07 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:19.608 09:35:07 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:19.870 09:35:07 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:19.870 09:35:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:19.870 09:35:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:19.870 09:35:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:19.870 09:35:07 -- scripts/common.sh@336 -- # IFS=.-: 00:03:19.870 09:35:07 -- scripts/common.sh@336 -- # read -ra ver1 00:03:19.870 09:35:07 -- scripts/common.sh@337 -- # IFS=.-: 00:03:19.870 09:35:07 -- scripts/common.sh@337 -- # read -ra ver2 00:03:19.870 09:35:07 -- scripts/common.sh@338 -- # local 'op=<' 00:03:19.870 09:35:07 -- scripts/common.sh@340 -- # ver1_l=2 00:03:19.870 09:35:07 -- scripts/common.sh@341 -- # ver2_l=1 00:03:19.870 09:35:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:19.870 09:35:07 -- scripts/common.sh@344 -- # case "$op" in 00:03:19.870 09:35:07 -- scripts/common.sh@345 -- # : 1 00:03:19.870 09:35:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:19.870 09:35:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:19.870 09:35:07 -- scripts/common.sh@365 -- # decimal 1 00:03:19.870 09:35:07 -- scripts/common.sh@353 -- # local d=1 00:03:19.870 09:35:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:19.870 09:35:07 -- scripts/common.sh@355 -- # echo 1 00:03:19.870 09:35:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:19.870 09:35:07 -- scripts/common.sh@366 -- # decimal 2 00:03:19.870 09:35:07 -- scripts/common.sh@353 -- # local d=2 00:03:19.870 09:35:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:19.870 09:35:07 -- scripts/common.sh@355 -- # echo 2 00:03:19.870 09:35:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:19.870 09:35:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:19.870 09:35:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:19.870 09:35:07 -- scripts/common.sh@368 -- # return 0 00:03:19.870 09:35:07 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:19.870 09:35:07 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:19.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.870 --rc genhtml_branch_coverage=1 00:03:19.870 --rc genhtml_function_coverage=1 00:03:19.870 --rc genhtml_legend=1 00:03:19.870 --rc geninfo_all_blocks=1 00:03:19.870 --rc geninfo_unexecuted_blocks=1 00:03:19.870 00:03:19.870 ' 00:03:19.870 09:35:07 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:19.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.870 --rc genhtml_branch_coverage=1 00:03:19.870 --rc genhtml_function_coverage=1 00:03:19.870 --rc genhtml_legend=1 00:03:19.870 --rc geninfo_all_blocks=1 00:03:19.870 --rc geninfo_unexecuted_blocks=1 00:03:19.870 00:03:19.870 ' 00:03:19.870 09:35:07 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:19.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.870 --rc genhtml_branch_coverage=1 00:03:19.870 --rc genhtml_function_coverage=1 00:03:19.870 --rc genhtml_legend=1 00:03:19.870 --rc geninfo_all_blocks=1 00:03:19.870 --rc geninfo_unexecuted_blocks=1 00:03:19.870 00:03:19.870 ' 00:03:19.870 09:35:07 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:19.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.870 --rc genhtml_branch_coverage=1 00:03:19.870 --rc genhtml_function_coverage=1 00:03:19.870 --rc genhtml_legend=1 00:03:19.870 --rc geninfo_all_blocks=1 00:03:19.870 --rc geninfo_unexecuted_blocks=1 00:03:19.870 00:03:19.870 ' 00:03:19.870 09:35:07 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:19.870 09:35:07 -- nvmf/common.sh@7 -- # uname -s 00:03:19.870 09:35:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:19.870 09:35:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:19.870 09:35:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:19.870 09:35:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:19.870 09:35:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:19.870 09:35:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:19.870 09:35:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:19.870 09:35:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:19.870 09:35:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:19.870 09:35:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:19.870 09:35:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:415bc8b4-eaf2-4ed5-80fd-e40086c58160 00:03:19.870 
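The scripts/common.sh fragment traced above ('lt 1.15 2' via cmp_versions) is the lcov version gate: the installed lcov version is split on '.', '-' and ':' and compared field by field against 2 to decide whether the legacy --rc lcov_* options apply. A minimal sketch reconstructed from the xtrace, not copied from scripts/common.sh:

    # Strict field-by-field "less than" on dotted version strings.
    lt() {
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # smaller field: less-than
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # larger field: not less-than
        done
        return 1  # equal throughout is not strictly less-than
    }

    # Usage mirroring the trace: lcov 1.15 is pre-2.x, so enable the old flags.
    lt "$(lcov --version | awk '{print $NF}')" 2 &&
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
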
09:35:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=415bc8b4-eaf2-4ed5-80fd-e40086c58160 00:03:19.870 09:35:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:19.870 09:35:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:19.870 09:35:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:19.870 09:35:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:19.870 09:35:07 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:19.870 09:35:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:19.870 09:35:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:19.870 09:35:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:19.870 09:35:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:19.870 09:35:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.870 09:35:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.870 09:35:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.870 09:35:07 -- paths/export.sh@5 -- # export PATH 00:03:19.870 09:35:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.870 09:35:07 -- nvmf/common.sh@51 -- # : 0 00:03:19.870 09:35:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:19.871 09:35:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:19.871 09:35:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:19.871 09:35:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:19.871 09:35:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:19.871 09:35:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:19.871 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:19.871 09:35:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:19.871 09:35:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:19.871 09:35:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:19.871 09:35:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:19.871 09:35:07 -- spdk/autotest.sh@32 -- # uname -s 00:03:19.871 09:35:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:19.871 09:35:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:19.871 09:35:07 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:19.871 09:35:07 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:19.871 09:35:07 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:19.871 09:35:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:19.871 09:35:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:19.871 09:35:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:19.871 09:35:07 -- spdk/autotest.sh@48 -- # udevadm_pid=54254 00:03:19.871 09:35:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:19.871 09:35:07 -- pm/common@17 -- # local monitor 00:03:19.871 09:35:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.871 09:35:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.871 09:35:07 -- pm/common@25 -- # sleep 1 00:03:19.871 09:35:07 -- pm/common@21 -- # date +%s 00:03:19.871 09:35:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:19.871 09:35:07 -- pm/common@21 -- # date +%s 00:03:19.871 09:35:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733391307 00:03:19.871 09:35:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733391307 00:03:19.871 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733391307_collect-cpu-load.pm.log 00:03:19.871 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733391307_collect-vmstat.pm.log 00:03:20.814 09:35:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:20.814 09:35:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:20.814 09:35:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:20.814 09:35:08 -- common/autotest_common.sh@10 -- # set +x 00:03:20.814 09:35:08 -- spdk/autotest.sh@59 -- # create_test_list 00:03:20.814 09:35:08 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:20.814 09:35:08 -- common/autotest_common.sh@10 -- # set +x 00:03:20.814 09:35:08 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:20.814 09:35:08 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:20.814 09:35:08 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:20.814 09:35:08 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:20.814 09:35:08 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:20.814 09:35:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:20.814 09:35:08 -- common/autotest_common.sh@1457 -- # uname 00:03:20.814 09:35:08 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:20.814 09:35:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:20.814 09:35:08 -- common/autotest_common.sh@1477 -- # uname 00:03:20.814 09:35:08 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:20.814 09:35:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:20.814 09:35:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:21.075 lcov: LCOV version 1.15 00:03:21.075 09:35:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:36.003 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:36.003 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:50.905 09:35:36 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:50.905 09:35:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.905 09:35:36 -- common/autotest_common.sh@10 -- # set +x 00:03:50.905 09:35:36 -- spdk/autotest.sh@78 -- # rm -f 00:03:50.905 09:35:36 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:50.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.905 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:50.905 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:50.905 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:50.905 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:50.905 09:35:37 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:50.905 09:35:37 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:50.905 09:35:37 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:50.905 09:35:37 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:50.905 09:35:37 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:50.905 09:35:37 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:50.905 09:35:37 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:50.905 09:35:37 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:50.905 09:35:37 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:50.905 09:35:37 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:50.905 09:35:37 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:50.905 09:35:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:50.905 09:35:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:50.905 09:35:37 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:50.905 09:35:37 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:03:50.905 09:35:37 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:50.905 09:35:37 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:50.905 09:35:37 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:50.905 09:35:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:50.905 09:35:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:50.905 09:35:37 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:50.905 09:35:37 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:03:50.905 09:35:37 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:50.905 09:35:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:50.905 09:35:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:50.905 09:35:37 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:50.905 09:35:37 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:03:50.905 09:35:37 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:50.905 09:35:37 -- common/autotest_common.sh@1652 -- # [[ 
-e /sys/block/nvme1n3/queue/zoned ]] 00:03:50.905 09:35:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:50.905 09:35:37 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:50.905 09:35:37 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:50.905 09:35:37 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:50.905 09:35:37 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:03:50.905 09:35:37 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:03:50.905 09:35:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:50.905 09:35:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:50.905 09:35:37 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:50.905 09:35:37 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:03:50.905 09:35:37 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:50.905 09:35:37 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:03:50.905 09:35:37 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:03:50.905 09:35:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:50.905 09:35:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:50.905 09:35:37 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:50.905 09:35:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:50.905 09:35:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:50.905 09:35:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:50.905 09:35:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:50.905 09:35:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:50.905 No valid GPT data, bailing 00:03:50.905 09:35:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:50.905 09:35:37 -- scripts/common.sh@394 -- # pt= 00:03:50.905 09:35:37 -- scripts/common.sh@395 -- # return 1 00:03:50.905 09:35:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:50.905 1+0 records in 00:03:50.905 1+0 records out 00:03:50.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025968 s, 40.4 MB/s 00:03:50.905 09:35:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:50.905 09:35:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:50.905 09:35:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:50.905 09:35:37 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:50.905 09:35:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:50.905 No valid GPT data, bailing 00:03:50.905 09:35:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:50.905 09:35:38 -- scripts/common.sh@394 -- # pt= 00:03:50.905 09:35:38 -- scripts/common.sh@395 -- # return 1 00:03:50.905 09:35:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:50.905 1+0 records in 00:03:50.905 1+0 records out 00:03:50.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00551032 s, 190 MB/s 00:03:50.905 09:35:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:50.905 09:35:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:50.905 09:35:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:50.905 09:35:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:50.905 09:35:38 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:50.905 No valid GPT data, bailing 00:03:50.905 09:35:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:50.905 09:35:38 -- scripts/common.sh@394 -- # pt= 00:03:50.905 09:35:38 -- scripts/common.sh@395 -- # return 1 00:03:50.905 09:35:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:50.905 1+0 records in 00:03:50.905 1+0 records out 00:03:50.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0055081 s, 190 MB/s 00:03:50.905 09:35:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:50.905 09:35:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:50.905 09:35:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:50.905 09:35:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:50.905 09:35:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:50.905 No valid GPT data, bailing 00:03:50.905 09:35:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:50.905 09:35:38 -- scripts/common.sh@394 -- # pt= 00:03:50.905 09:35:38 -- scripts/common.sh@395 -- # return 1 00:03:50.905 09:35:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:50.905 1+0 records in 00:03:50.905 1+0 records out 00:03:50.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00571115 s, 184 MB/s 00:03:50.905 09:35:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:50.905 09:35:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:50.905 09:35:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:50.905 09:35:38 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:50.905 09:35:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:50.905 No valid GPT data, bailing 00:03:50.905 09:35:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:50.905 09:35:38 -- scripts/common.sh@394 -- # pt= 00:03:50.905 09:35:38 -- scripts/common.sh@395 -- # return 1 00:03:50.905 09:35:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:50.905 1+0 records in 00:03:50.905 1+0 records out 00:03:50.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00573327 s, 183 MB/s 00:03:50.905 09:35:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:50.905 09:35:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:50.905 09:35:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:50.905 09:35:38 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:50.905 09:35:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:50.905 No valid GPT data, bailing 00:03:50.905 09:35:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:50.905 09:35:38 -- scripts/common.sh@394 -- # pt= 00:03:50.905 09:35:38 -- scripts/common.sh@395 -- # return 1 00:03:50.905 09:35:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:50.905 1+0 records in 00:03:50.905 1+0 records out 00:03:50.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00607053 s, 173 MB/s 00:03:50.905 09:35:38 -- spdk/autotest.sh@105 -- # sync 00:03:50.905 09:35:38 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:50.905 09:35:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:50.905 09:35:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:52.818 
09:35:40 -- spdk/autotest.sh@111 -- # uname -s 00:03:52.818 09:35:40 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:52.818 09:35:40 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:52.818 09:35:40 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:53.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.368 Hugepages 00:03:53.368 node hugesize free / total 00:03:53.629 node0 1048576kB 0 / 0 00:03:53.629 node0 2048kB 0 / 0 00:03:53.629 00:03:53.629 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:53.629 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:53.629 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:53.629 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:03:53.890 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:53.890 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:53.890 09:35:41 -- spdk/autotest.sh@117 -- # uname -s 00:03:53.890 09:35:41 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:53.890 09:35:41 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:53.890 09:35:41 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:54.462 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.036 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:55.036 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:55.036 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:55.036 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:55.036 09:35:42 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:55.979 09:35:43 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:55.980 09:35:43 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:55.980 09:35:43 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:55.980 09:35:43 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:55.980 09:35:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:55.980 09:35:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:55.980 09:35:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:55.980 09:35:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:55.980 09:35:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:56.241 09:35:43 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:56.241 09:35:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:56.241 09:35:43 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:56.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.503 Waiting for block devices as requested 00:03:56.503 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:56.765 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:56.765 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:03:56.765 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.057 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:02.058 09:35:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:02.058 09:35:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
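The get_nvme_ctrlr_from_bdf calls traced next resolve a PCI address to its /dev/nvmeX node purely through sysfs, then parse 'nvme id-ctrl' for the controller's capabilities; a sketch reconstructed from the readlink/grep/basename and grep/cut steps visible in the trace below:

    # Map a PCI bdf (e.g. 0000:00:10.0) to its NVMe controller name: every
    # /sys/class/nvme/nvmeX resolves into the owning PCI device's sysfs tree.
    get_nvme_ctrlr_from_bdf() {
        local bdf=$1 sysfs_path
        sysfs_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
        basename "$sysfs_path"
    }

    nvme_ctrlr=/dev/$(get_nvme_ctrlr_from_bdf 0000:00:10.0)  # /dev/nvme1 on this VM
    # OACS bit 3 (0x8) is namespace management; 0x12a & 0x8 = 8, hence the
    # oacs_ns_manage=8 seen in the trace.
    oacs=$(nvme id-ctrl "$nvme_ctrlr" | grep oacs | cut -d: -f2)
    oacs_ns_manage=$(( oacs & 0x8 ))
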
00:04:02.058 09:35:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:02.058 09:35:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:02.058 09:35:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.058 09:35:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.058 09:35:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:02.058 09:35:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:02.058 09:35:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:02.058 09:35:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:02.058 09:35:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:02.058 09:35:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1543 -- # continue 00:04:02.058 09:35:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:02.058 09:35:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:02.058 09:35:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:02.058 09:35:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:02.058 09:35:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.058 09:35:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.058 09:35:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:02.058 09:35:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:02.058 09:35:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:02.058 09:35:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:02.058 09:35:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:04:02.058 09:35:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1543 -- # continue 00:04:02.058 09:35:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:02.058 09:35:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:02.058 09:35:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:02.058 09:35:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:02.058 09:35:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:02.058 09:35:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:02.058 09:35:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:02.058 09:35:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:02.058 09:35:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:02.058 09:35:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:02.058 09:35:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:02.058 09:35:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1543 -- # continue 00:04:02.058 09:35:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:02.058 09:35:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:02.058 09:35:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:02.058 09:35:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:02.058 09:35:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:02.058 09:35:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:02.058 09:35:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:02.058 09:35:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:02.058 09:35:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:02.058 09:35:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:02.058 09:35:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:02.058 09:35:49 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:02.058 09:35:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:02.058 09:35:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:02.058 09:35:49 -- common/autotest_common.sh@1543 -- # continue 00:04:02.058 09:35:49 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:02.058 09:35:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:02.058 09:35:49 -- common/autotest_common.sh@10 -- # set +x 00:04:02.058 09:35:49 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:02.058 09:35:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.058 09:35:49 -- common/autotest_common.sh@10 -- # set +x 00:04:02.058 09:35:49 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.630 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.201 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.201 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.201 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.201 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.461 09:35:50 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:03.462 09:35:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:03.462 09:35:50 -- common/autotest_common.sh@10 -- # set +x 00:04:03.462 09:35:50 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:03.462 09:35:50 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:03.462 09:35:50 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.462 09:35:50 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:03.462 09:35:50 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:03.462 09:35:50 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:03.462 09:35:50 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:03.462 09:35:50 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:03.462 09:35:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:03.462 09:35:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:03.462 09:35:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.462 09:35:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:03.462 09:35:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:03.462 09:35:50 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:03.462 09:35:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:03.462 09:35:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:03.462 09:35:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:03.462 09:35:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:03.462 09:35:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.462 09:35:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:03.462 09:35:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:03.462 09:35:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:03.462 
09:35:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.462 09:35:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:03.462 09:35:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:03.462 09:35:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:03.462 09:35:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.462 09:35:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:03.462 09:35:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:03.462 09:35:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:03.462 09:35:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.462 09:35:50 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:03.462 09:35:50 -- common/autotest_common.sh@1572 -- # return 0 00:04:03.462 09:35:50 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:03.462 09:35:50 -- common/autotest_common.sh@1580 -- # return 0 00:04:03.462 09:35:50 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:03.462 09:35:50 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:03.462 09:35:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.462 09:35:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.462 09:35:50 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:03.462 09:35:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.462 09:35:50 -- common/autotest_common.sh@10 -- # set +x 00:04:03.462 09:35:50 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:03.462 09:35:50 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:03.462 09:35:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.462 09:35:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.462 09:35:50 -- common/autotest_common.sh@10 -- # set +x 00:04:03.462 ************************************ 00:04:03.462 START TEST env 00:04:03.462 ************************************ 00:04:03.462 09:35:51 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:03.462 * Looking for test storage... 
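get_nvme_bdfs, traced just above at autotest_common.sh@1499, enumerates controllers by asking gen_nvme.sh for a generated SPDK config and pulling each traddr out with jq; opal_revert_cleanup then reads every device's PCI ID and acts only on 0x0a54 parts (assumed here to be the opal-capable Intel devices this cleanup targets). A sketch of that pair:

    # Enumerate NVMe PCI addresses from the generated SPDK config.
    get_nvme_bdfs() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1  # the (( 4 == 0 )) guard in the trace
        printf '%s\n' "${bdfs[@]}"
    }

    # Keep only 0x0a54 devices; on this VM all four QEMU controllers report
    # 0x0010, so the list stays empty and the cleanup returns immediately.
    bdfs=()
    for bdf in $(get_nvme_bdfs); do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && bdfs+=("$bdf")
    done
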
00:04:03.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:03.462 09:35:51 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:03.462 09:35:51 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:03.462 09:35:51 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:03.723 09:35:51 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:03.723 09:35:51 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.723 09:35:51 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.723 09:35:51 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.723 09:35:51 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.723 09:35:51 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.723 09:35:51 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.723 09:35:51 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.723 09:35:51 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.723 09:35:51 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.723 09:35:51 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.723 09:35:51 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.723 09:35:51 env -- scripts/common.sh@344 -- # case "$op" in 00:04:03.723 09:35:51 env -- scripts/common.sh@345 -- # : 1 00:04:03.723 09:35:51 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.723 09:35:51 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:03.723 09:35:51 env -- scripts/common.sh@365 -- # decimal 1 00:04:03.723 09:35:51 env -- scripts/common.sh@353 -- # local d=1 00:04:03.723 09:35:51 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.723 09:35:51 env -- scripts/common.sh@355 -- # echo 1 00:04:03.723 09:35:51 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.723 09:35:51 env -- scripts/common.sh@366 -- # decimal 2 00:04:03.723 09:35:51 env -- scripts/common.sh@353 -- # local d=2 00:04:03.723 09:35:51 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.723 09:35:51 env -- scripts/common.sh@355 -- # echo 2 00:04:03.723 09:35:51 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.723 09:35:51 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.723 09:35:51 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.723 09:35:51 env -- scripts/common.sh@368 -- # return 0 00:04:03.723 09:35:51 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.723 09:35:51 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:03.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.723 --rc genhtml_branch_coverage=1 00:04:03.723 --rc genhtml_function_coverage=1 00:04:03.723 --rc genhtml_legend=1 00:04:03.723 --rc geninfo_all_blocks=1 00:04:03.723 --rc geninfo_unexecuted_blocks=1 00:04:03.723 00:04:03.723 ' 00:04:03.723 09:35:51 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:03.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.723 --rc genhtml_branch_coverage=1 00:04:03.723 --rc genhtml_function_coverage=1 00:04:03.723 --rc genhtml_legend=1 00:04:03.723 --rc geninfo_all_blocks=1 00:04:03.723 --rc geninfo_unexecuted_blocks=1 00:04:03.723 00:04:03.723 ' 00:04:03.723 09:35:51 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:03.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.724 --rc genhtml_branch_coverage=1 00:04:03.724 --rc genhtml_function_coverage=1 00:04:03.724 --rc 
genhtml_legend=1 00:04:03.724 --rc geninfo_all_blocks=1 00:04:03.724 --rc geninfo_unexecuted_blocks=1 00:04:03.724 00:04:03.724 ' 00:04:03.724 09:35:51 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:03.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.724 --rc genhtml_branch_coverage=1 00:04:03.724 --rc genhtml_function_coverage=1 00:04:03.724 --rc genhtml_legend=1 00:04:03.724 --rc geninfo_all_blocks=1 00:04:03.724 --rc geninfo_unexecuted_blocks=1 00:04:03.724 00:04:03.724 ' 00:04:03.724 09:35:51 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:03.724 09:35:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.724 09:35:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.724 09:35:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.724 ************************************ 00:04:03.724 START TEST env_memory 00:04:03.724 ************************************ 00:04:03.724 09:35:51 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:03.724 00:04:03.724 00:04:03.724 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.724 http://cunit.sourceforge.net/ 00:04:03.724 00:04:03.724 00:04:03.724 Suite: memory 00:04:03.724 Test: alloc and free memory map ...[2024-12-05 09:35:51.230082] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:03.724 passed 00:04:03.724 Test: mem map translation ...[2024-12-05 09:35:51.269105] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:03.724 [2024-12-05 09:35:51.269160] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:03.724 [2024-12-05 09:35:51.269223] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:03.724 [2024-12-05 09:35:51.269236] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:03.724 passed 00:04:03.724 Test: mem map registration ...[2024-12-05 09:35:51.337415] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:03.724 [2024-12-05 09:35:51.337464] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:03.985 passed 00:04:03.985 Test: mem map adjacent registrations ...passed 00:04:03.986 00:04:03.986 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.986 suites 1 1 n/a 0 0 00:04:03.986 tests 4 4 4 0 0 00:04:03.986 asserts 152 152 152 0 n/a 00:04:03.986 00:04:03.986 Elapsed time = 0.233 seconds 00:04:03.986 00:04:03.986 real 0m0.269s 00:04:03.986 user 0m0.238s 00:04:03.986 sys 0m0.024s 00:04:03.986 09:35:51 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.986 09:35:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:03.986 ************************************ 00:04:03.986 END TEST env_memory 00:04:03.986 ************************************ 00:04:03.986 09:35:51 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:03.986 09:35:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.986 09:35:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.986 09:35:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.986 ************************************ 00:04:03.986 START TEST env_vtophys 00:04:03.986 ************************************ 00:04:03.986 09:35:51 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:03.986 EAL: lib.eal log level changed from notice to debug 00:04:03.986 EAL: Detected lcore 0 as core 0 on socket 0 00:04:03.986 EAL: Detected lcore 1 as core 0 on socket 0 00:04:03.986 EAL: Detected lcore 2 as core 0 on socket 0 00:04:03.986 EAL: Detected lcore 3 as core 0 on socket 0 00:04:03.986 EAL: Detected lcore 4 as core 0 on socket 0 00:04:03.986 EAL: Detected lcore 5 as core 0 on socket 0 00:04:03.986 EAL: Detected lcore 6 as core 0 on socket 0 00:04:03.986 EAL: Detected lcore 7 as core 0 on socket 0 00:04:03.986 EAL: Detected lcore 8 as core 0 on socket 0 00:04:03.986 EAL: Detected lcore 9 as core 0 on socket 0 00:04:03.986 EAL: Maximum logical cores by configuration: 128 00:04:03.986 EAL: Detected CPU lcores: 10 00:04:03.986 EAL: Detected NUMA nodes: 1 00:04:03.986 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:03.986 EAL: Detected shared linkage of DPDK 00:04:03.986 EAL: No shared files mode enabled, IPC will be disabled 00:04:03.986 EAL: Selected IOVA mode 'PA' 00:04:03.986 EAL: Probing VFIO support... 00:04:03.986 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:03.986 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:03.986 EAL: Ask a virtual area of 0x2e000 bytes 00:04:03.986 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:03.986 EAL: Setting up physically contiguous memory... 
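The probe above fails because neither the vfio nor the vfio_pci kernel module is loaded in this VM, which is consistent with EAL selecting IOVA mode 'PA' a few entries earlier. A minimal sketch of that check, using the sysfs paths from the EAL errors (the echo text is illustrative, not EAL's exact wording):

    # Mirror EAL's VFIO probe: both modules must be loaded for
    # IOMMU-backed device access; otherwise physical addressing is used.
    if [ -d /sys/module/vfio ] && [ -d /sys/module/vfio_pci ]; then
        echo "vfio available: IOVA mode 'VA' is possible"
    else
        echo "vfio missing: falling back to IOVA mode 'PA'"
    fi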
00:04:03.986 EAL: Setting maximum number of open files to 524288 00:04:03.986 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:03.986 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:03.986 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.986 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:03.986 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.986 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.986 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:03.986 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:03.986 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.986 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:03.986 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.986 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.986 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:03.986 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:03.986 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.986 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:03.986 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.986 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.986 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:03.986 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:03.986 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.986 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:03.986 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.986 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.986 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:03.986 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:03.986 EAL: Hugepages will be freed exactly as allocated. 00:04:03.986 EAL: No shared files mode enabled, IPC is disabled 00:04:03.986 EAL: No shared files mode enabled, IPC is disabled 00:04:04.248 EAL: TSC frequency is ~2600000 KHz 00:04:04.248 EAL: Main lcore 0 is ready (tid=7f5289902a40;cpuset=[0]) 00:04:04.248 EAL: Trying to obtain current memory policy. 00:04:04.248 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.248 EAL: Restoring previous memory policy: 0 00:04:04.248 EAL: request: mp_malloc_sync 00:04:04.248 EAL: No shared files mode enabled, IPC is disabled 00:04:04.248 EAL: Heap on socket 0 was expanded by 2MB 00:04:04.248 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.248 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:04.248 EAL: Mem event callback 'spdk:(nil)' registered 00:04:04.248 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:04.248 00:04:04.248 00:04:04.248 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.248 http://cunit.sourceforge.net/ 00:04:04.248 00:04:04.248 00:04:04.248 Suite: components_suite 00:04:04.510 Test: vtophys_malloc_test ...passed 00:04:04.510 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
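Each of the four memseg lists just created reserves 0x400000000 bytes of virtual address space; that figure is simply n_segs times the 2 MiB hugepage size reported above. A one-line verification sketch (not part of the test itself):

    # 8192 segments x 2 MiB hugepages = 16 GiB of VA per memseg list
    printf '0x%x bytes\n' $(( 8192 * 2 * 1024 * 1024 ))   # -> 0x400000000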
00:04:04.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.510 EAL: Restoring previous memory policy: 4 00:04:04.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.510 EAL: request: mp_malloc_sync 00:04:04.510 EAL: No shared files mode enabled, IPC is disabled 00:04:04.510 EAL: Heap on socket 0 was expanded by 4MB 00:04:04.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.510 EAL: request: mp_malloc_sync 00:04:04.510 EAL: No shared files mode enabled, IPC is disabled 00:04:04.510 EAL: Heap on socket 0 was shrunk by 4MB 00:04:04.510 EAL: Trying to obtain current memory policy. 00:04:04.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.510 EAL: Restoring previous memory policy: 4 00:04:04.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.510 EAL: request: mp_malloc_sync 00:04:04.510 EAL: No shared files mode enabled, IPC is disabled 00:04:04.510 EAL: Heap on socket 0 was expanded by 6MB 00:04:04.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.510 EAL: request: mp_malloc_sync 00:04:04.510 EAL: No shared files mode enabled, IPC is disabled 00:04:04.510 EAL: Heap on socket 0 was shrunk by 6MB 00:04:04.510 EAL: Trying to obtain current memory policy. 00:04:04.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.510 EAL: Restoring previous memory policy: 4 00:04:04.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.510 EAL: request: mp_malloc_sync 00:04:04.510 EAL: No shared files mode enabled, IPC is disabled 00:04:04.510 EAL: Heap on socket 0 was expanded by 10MB 00:04:04.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.510 EAL: request: mp_malloc_sync 00:04:04.510 EAL: No shared files mode enabled, IPC is disabled 00:04:04.510 EAL: Heap on socket 0 was shrunk by 10MB 00:04:04.510 EAL: Trying to obtain current memory policy. 00:04:04.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.771 EAL: Restoring previous memory policy: 4 00:04:04.771 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.771 EAL: request: mp_malloc_sync 00:04:04.771 EAL: No shared files mode enabled, IPC is disabled 00:04:04.771 EAL: Heap on socket 0 was expanded by 18MB 00:04:04.771 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.771 EAL: request: mp_malloc_sync 00:04:04.771 EAL: No shared files mode enabled, IPC is disabled 00:04:04.771 EAL: Heap on socket 0 was shrunk by 18MB 00:04:04.771 EAL: Trying to obtain current memory policy. 00:04:04.771 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.771 EAL: Restoring previous memory policy: 4 00:04:04.771 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.771 EAL: request: mp_malloc_sync 00:04:04.771 EAL: No shared files mode enabled, IPC is disabled 00:04:04.771 EAL: Heap on socket 0 was expanded by 34MB 00:04:04.771 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.771 EAL: request: mp_malloc_sync 00:04:04.771 EAL: No shared files mode enabled, IPC is disabled 00:04:04.771 EAL: Heap on socket 0 was shrunk by 34MB 00:04:04.771 EAL: Trying to obtain current memory policy. 
00:04:04.771 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.771 EAL: Restoring previous memory policy: 4 00:04:04.771 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.771 EAL: request: mp_malloc_sync 00:04:04.771 EAL: No shared files mode enabled, IPC is disabled 00:04:04.771 EAL: Heap on socket 0 was expanded by 66MB 00:04:04.771 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.771 EAL: request: mp_malloc_sync 00:04:04.771 EAL: No shared files mode enabled, IPC is disabled 00:04:04.771 EAL: Heap on socket 0 was shrunk by 66MB 00:04:05.032 EAL: Trying to obtain current memory policy. 00:04:05.032 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.032 EAL: Restoring previous memory policy: 4 00:04:05.032 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.032 EAL: request: mp_malloc_sync 00:04:05.032 EAL: No shared files mode enabled, IPC is disabled 00:04:05.032 EAL: Heap on socket 0 was expanded by 130MB 00:04:05.032 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.032 EAL: request: mp_malloc_sync 00:04:05.032 EAL: No shared files mode enabled, IPC is disabled 00:04:05.032 EAL: Heap on socket 0 was shrunk by 130MB 00:04:05.294 EAL: Trying to obtain current memory policy. 00:04:05.294 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.294 EAL: Restoring previous memory policy: 4 00:04:05.294 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.294 EAL: request: mp_malloc_sync 00:04:05.294 EAL: No shared files mode enabled, IPC is disabled 00:04:05.294 EAL: Heap on socket 0 was expanded by 258MB 00:04:05.555 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.555 EAL: request: mp_malloc_sync 00:04:05.555 EAL: No shared files mode enabled, IPC is disabled 00:04:05.555 EAL: Heap on socket 0 was shrunk by 258MB 00:04:05.816 EAL: Trying to obtain current memory policy. 00:04:05.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.081 EAL: Restoring previous memory policy: 4 00:04:06.081 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.081 EAL: request: mp_malloc_sync 00:04:06.081 EAL: No shared files mode enabled, IPC is disabled 00:04:06.081 EAL: Heap on socket 0 was expanded by 514MB 00:04:06.679 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.679 EAL: request: mp_malloc_sync 00:04:06.679 EAL: No shared files mode enabled, IPC is disabled 00:04:06.679 EAL: Heap on socket 0 was shrunk by 514MB 00:04:07.251 EAL: Trying to obtain current memory policy. 
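The expand/shrink sizes in vtophys_spdk_malloc_test grow as 4, 6, 10, 18, 34, 66, 130, 258 and 514 MB so far, i.e. 2 MB plus a doubling power of two, with a final 1026 MB round still to come below. A sketch reproducing the sequence (illustrative only):

    # Allocation sizes follow 2 + 2^n MB for n = 1..10
    for n in $(seq 1 10); do echo "$(( 2 + (1 << n) )) MB"; done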
00:04:07.251 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.511 EAL: Restoring previous memory policy: 4 00:04:07.511 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.511 EAL: request: mp_malloc_sync 00:04:07.511 EAL: No shared files mode enabled, IPC is disabled 00:04:07.511 EAL: Heap on socket 0 was expanded by 1026MB 00:04:08.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.897 EAL: request: mp_malloc_sync 00:04:08.897 EAL: No shared files mode enabled, IPC is disabled 00:04:08.897 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:09.841 passed 00:04:09.841 00:04:09.841 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.841 suites 1 1 n/a 0 0 00:04:09.841 tests 2 2 2 0 0 00:04:09.841 asserts 5824 5824 5824 0 n/a 00:04:09.841 00:04:09.841 Elapsed time = 5.497 seconds 00:04:09.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.841 EAL: request: mp_malloc_sync 00:04:09.841 EAL: No shared files mode enabled, IPC is disabled 00:04:09.841 EAL: Heap on socket 0 was shrunk by 2MB 00:04:09.841 EAL: No shared files mode enabled, IPC is disabled 00:04:09.841 EAL: No shared files mode enabled, IPC is disabled 00:04:09.841 EAL: No shared files mode enabled, IPC is disabled 00:04:09.841 00:04:09.841 real 0m5.782s 00:04:09.841 user 0m4.744s 00:04:09.841 sys 0m0.877s 00:04:09.841 09:35:57 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.841 09:35:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:09.841 ************************************ 00:04:09.841 END TEST env_vtophys 00:04:09.841 ************************************ 00:04:09.841 09:35:57 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:09.841 09:35:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.841 09:35:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.841 09:35:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.841 ************************************ 00:04:09.841 START TEST env_pci 00:04:09.841 ************************************ 00:04:09.841 09:35:57 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:09.841 00:04:09.841 00:04:09.841 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.841 http://cunit.sourceforge.net/ 00:04:09.841 00:04:09.841 00:04:09.841 Suite: pci 00:04:09.841 Test: pci_hook ...[2024-12-05 09:35:57.378122] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57022 has claimed it 00:04:09.841 passed 00:04:09.841 00:04:09.841 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.841 suites 1 1 n/a 0 0 00:04:09.841 tests 1 1 1 0 0 00:04:09.841 asserts 25 25 25 0 n/a 00:04:09.841 00:04:09.841 Elapsed time = 0.005 seconds 00:04:09.841 EAL: Cannot find device (10000:00:01.0) 00:04:09.841 EAL: Failed to attach device on primary process 00:04:09.841 00:04:09.841 real 0m0.065s 00:04:09.841 user 0m0.033s 00:04:09.841 sys 0m0.031s 00:04:09.841 09:35:57 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.841 ************************************ 00:04:09.841 END TEST env_pci 00:04:09.841 ************************************ 00:04:09.841 09:35:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:09.841 09:35:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:09.841 09:35:57 env -- env/env.sh@15 -- # uname 00:04:09.841 09:35:57 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:09.841 09:35:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:09.841 09:35:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:09.841 09:35:57 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:09.841 09:35:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.841 09:35:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.841 ************************************ 00:04:09.841 START TEST env_dpdk_post_init 00:04:09.841 ************************************ 00:04:09.841 09:35:57 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:10.103 EAL: Detected CPU lcores: 10 00:04:10.103 EAL: Detected NUMA nodes: 1 00:04:10.103 EAL: Detected shared linkage of DPDK 00:04:10.103 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:10.103 EAL: Selected IOVA mode 'PA' 00:04:10.103 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:10.103 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:10.103 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:10.103 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:10.103 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:10.103 Starting DPDK initialization... 00:04:10.103 Starting SPDK post initialization... 00:04:10.103 SPDK NVMe probe 00:04:10.103 Attaching to 0000:00:10.0 00:04:10.103 Attaching to 0000:00:11.0 00:04:10.103 Attaching to 0000:00:12.0 00:04:10.103 Attaching to 0000:00:13.0 00:04:10.103 Attached to 0000:00:13.0 00:04:10.103 Attached to 0000:00:10.0 00:04:10.103 Attached to 0000:00:11.0 00:04:10.103 Attached to 0000:00:12.0 00:04:10.103 Cleaning up... 
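All four probed controllers report vendor:device 1b36:0010, QEMU's emulated NVMe device, which is why the test VM exposes them at 0000:00:10.0 through 0000:00:13.0. A quick cross-check from inside the guest (assumes pciutils is installed; the grep pattern is just the IDs from the log):

    # List the QEMU NVMe controllers the spdk_nvme driver attached to
    lspci -nn | grep '1b36:0010'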
00:04:10.365 00:04:10.365 real 0m0.270s 00:04:10.365 user 0m0.092s 00:04:10.365 sys 0m0.080s 00:04:10.365 09:35:57 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.365 09:35:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:10.365 ************************************ 00:04:10.365 END TEST env_dpdk_post_init 00:04:10.365 ************************************ 00:04:10.365 09:35:57 env -- env/env.sh@26 -- # uname 00:04:10.365 09:35:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:10.365 09:35:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:10.365 09:35:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.365 09:35:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.365 09:35:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.365 ************************************ 00:04:10.365 START TEST env_mem_callbacks 00:04:10.365 ************************************ 00:04:10.365 09:35:57 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:10.365 EAL: Detected CPU lcores: 10 00:04:10.365 EAL: Detected NUMA nodes: 1 00:04:10.365 EAL: Detected shared linkage of DPDK 00:04:10.365 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:10.365 EAL: Selected IOVA mode 'PA' 00:04:10.365 00:04:10.365 00:04:10.365 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.365 http://cunit.sourceforge.net/ 00:04:10.365 00:04:10.365 00:04:10.365 Suite: memory 00:04:10.365 Test: test ... 00:04:10.365 register 0x200000200000 2097152 00:04:10.365 malloc 3145728 00:04:10.365 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:10.365 register 0x200000400000 4194304 00:04:10.365 buf 0x2000004fffc0 len 3145728 PASSED 00:04:10.365 malloc 64 00:04:10.365 buf 0x2000004ffec0 len 64 PASSED 00:04:10.365 malloc 4194304 00:04:10.365 register 0x200000800000 6291456 00:04:10.365 buf 0x2000009fffc0 len 4194304 PASSED 00:04:10.365 free 0x2000004fffc0 3145728 00:04:10.365 free 0x2000004ffec0 64 00:04:10.365 unregister 0x200000400000 4194304 PASSED 00:04:10.365 free 0x2000009fffc0 4194304 00:04:10.365 unregister 0x200000800000 6291456 PASSED 00:04:10.365 malloc 8388608 00:04:10.365 register 0x200000400000 10485760 00:04:10.365 buf 0x2000005fffc0 len 8388608 PASSED 00:04:10.365 free 0x2000005fffc0 8388608 00:04:10.365 unregister 0x200000400000 10485760 PASSED 00:04:10.365 passed 00:04:10.365 00:04:10.365 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.365 suites 1 1 n/a 0 0 00:04:10.365 tests 1 1 1 0 0 00:04:10.365 asserts 15 15 15 0 n/a 00:04:10.365 00:04:10.365 Elapsed time = 0.048 seconds 00:04:10.626 00:04:10.626 real 0m0.223s 00:04:10.626 user 0m0.065s 00:04:10.626 sys 0m0.054s 00:04:10.626 09:35:58 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.626 09:35:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:10.626 ************************************ 00:04:10.626 END TEST env_mem_callbacks 00:04:10.627 ************************************ 00:04:10.627 ************************************ 00:04:10.627 END TEST env 00:04:10.627 ************************************ 00:04:10.627 00:04:10.627 real 0m7.047s 00:04:10.627 user 0m5.327s 00:04:10.627 sys 0m1.284s 00:04:10.627 09:35:58 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.627 09:35:58 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:10.627 09:35:58 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:10.627 09:35:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.627 09:35:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.627 09:35:58 -- common/autotest_common.sh@10 -- # set +x 00:04:10.627 ************************************ 00:04:10.627 START TEST rpc 00:04:10.627 ************************************ 00:04:10.627 09:35:58 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:10.627 * Looking for test storage... 00:04:10.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:10.627 09:35:58 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:10.627 09:35:58 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:10.627 09:35:58 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:10.627 09:35:58 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:10.627 09:35:58 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.627 09:35:58 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.627 09:35:58 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.627 09:35:58 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.627 09:35:58 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.627 09:35:58 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.627 09:35:58 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.627 09:35:58 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.627 09:35:58 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.627 09:35:58 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.627 09:35:58 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.627 09:35:58 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:10.627 09:35:58 rpc -- scripts/common.sh@345 -- # : 1 00:04:10.627 09:35:58 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.627 09:35:58 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.627 09:35:58 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:10.627 09:35:58 rpc -- scripts/common.sh@353 -- # local d=1 00:04:10.627 09:35:58 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.627 09:35:58 rpc -- scripts/common.sh@355 -- # echo 1 00:04:10.627 09:35:58 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.627 09:35:58 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:10.627 09:35:58 rpc -- scripts/common.sh@353 -- # local d=2 00:04:10.627 09:35:58 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.888 09:35:58 rpc -- scripts/common.sh@355 -- # echo 2 00:04:10.888 09:35:58 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.888 09:35:58 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.888 09:35:58 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.888 09:35:58 rpc -- scripts/common.sh@368 -- # return 0 00:04:10.888 09:35:58 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.888 09:35:58 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:10.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.888 --rc genhtml_branch_coverage=1 00:04:10.888 --rc genhtml_function_coverage=1 00:04:10.888 --rc genhtml_legend=1 00:04:10.888 --rc geninfo_all_blocks=1 00:04:10.888 --rc geninfo_unexecuted_blocks=1 00:04:10.888 00:04:10.888 ' 00:04:10.888 09:35:58 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:10.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.888 --rc genhtml_branch_coverage=1 00:04:10.888 --rc genhtml_function_coverage=1 00:04:10.888 --rc genhtml_legend=1 00:04:10.888 --rc geninfo_all_blocks=1 00:04:10.888 --rc geninfo_unexecuted_blocks=1 00:04:10.888 00:04:10.888 ' 00:04:10.888 09:35:58 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:10.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.888 --rc genhtml_branch_coverage=1 00:04:10.888 --rc genhtml_function_coverage=1 00:04:10.888 --rc genhtml_legend=1 00:04:10.888 --rc geninfo_all_blocks=1 00:04:10.888 --rc geninfo_unexecuted_blocks=1 00:04:10.888 00:04:10.888 ' 00:04:10.888 09:35:58 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:10.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.888 --rc genhtml_branch_coverage=1 00:04:10.888 --rc genhtml_function_coverage=1 00:04:10.888 --rc genhtml_legend=1 00:04:10.888 --rc geninfo_all_blocks=1 00:04:10.888 --rc geninfo_unexecuted_blocks=1 00:04:10.888 00:04:10.888 ' 00:04:10.888 09:35:58 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57149 00:04:10.888 09:35:58 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.888 09:35:58 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57149 00:04:10.888 09:35:58 rpc -- common/autotest_common.sh@835 -- # '[' -z 57149 ']' 00:04:10.888 09:35:58 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:10.888 09:35:58 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.888 09:35:58 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.888 09:35:58 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
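The cmp_versions xtrace a few entries above is scripts/common.sh deciding whether the installed lcov predates version 2, comparing dotted version fields numerically one at a time. A simplified sketch of that helper (the real one also guards against non-numeric fields, which this sketch omits):

    lt() {
        local IFS='.-:'                 # split fields on '.', '-' and ':'
        local -a ver1=($1) ver2=($2)
        local v
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                        # equal, so not less-than
    }
    # lcov 1.x needs the legacy --rc lcov_*_coverage=1 switches set above
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov detected"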
00:04:10.888 09:35:58 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.888 09:35:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.889 [2024-12-05 09:35:58.333843] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:04:10.889 [2024-12-05 09:35:58.334119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57149 ] 00:04:10.889 [2024-12-05 09:35:58.489889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.150 [2024-12-05 09:35:58.615769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:11.150 [2024-12-05 09:35:58.615838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57149' to capture a snapshot of events at runtime. 00:04:11.150 [2024-12-05 09:35:58.615850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:11.150 [2024-12-05 09:35:58.615861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:11.150 [2024-12-05 09:35:58.615870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57149 for offline analysis/debug. 00:04:11.150 [2024-12-05 09:35:58.616862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.723 09:35:59 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.723 09:35:59 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:11.723 09:35:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:11.723 09:35:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:11.723 09:35:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:11.723 09:35:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:11.723 09:35:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.723 09:35:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.723 09:35:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.723 ************************************ 00:04:11.723 START TEST rpc_integrity 00:04:11.723 ************************************ 00:04:11.723 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:11.723 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:11.723 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.723 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.723 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.723 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:11.723 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:11.984 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:11.985 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:11.985 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.985 09:35:59 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.985 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.985 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:11.985 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:11.985 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.985 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.985 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.985 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:11.985 { 00:04:11.985 "name": "Malloc0", 00:04:11.985 "aliases": [ 00:04:11.985 "1987eac2-e4d1-4b70-b83b-3ee4ad2b0cf4" 00:04:11.985 ], 00:04:11.985 "product_name": "Malloc disk", 00:04:11.985 "block_size": 512, 00:04:11.985 "num_blocks": 16384, 00:04:11.985 "uuid": "1987eac2-e4d1-4b70-b83b-3ee4ad2b0cf4", 00:04:11.985 "assigned_rate_limits": { 00:04:11.985 "rw_ios_per_sec": 0, 00:04:11.985 "rw_mbytes_per_sec": 0, 00:04:11.985 "r_mbytes_per_sec": 0, 00:04:11.985 "w_mbytes_per_sec": 0 00:04:11.985 }, 00:04:11.985 "claimed": false, 00:04:11.985 "zoned": false, 00:04:11.985 "supported_io_types": { 00:04:11.985 "read": true, 00:04:11.985 "write": true, 00:04:11.985 "unmap": true, 00:04:11.985 "flush": true, 00:04:11.985 "reset": true, 00:04:11.985 "nvme_admin": false, 00:04:11.985 "nvme_io": false, 00:04:11.985 "nvme_io_md": false, 00:04:11.985 "write_zeroes": true, 00:04:11.985 "zcopy": true, 00:04:11.985 "get_zone_info": false, 00:04:11.985 "zone_management": false, 00:04:11.985 "zone_append": false, 00:04:11.985 "compare": false, 00:04:11.985 "compare_and_write": false, 00:04:11.985 "abort": true, 00:04:11.985 "seek_hole": false, 00:04:11.985 "seek_data": false, 00:04:11.985 "copy": true, 00:04:11.985 "nvme_iov_md": false 00:04:11.985 }, 00:04:11.985 "memory_domains": [ 00:04:11.985 { 00:04:11.985 "dma_device_id": "system", 00:04:11.985 "dma_device_type": 1 00:04:11.985 }, 00:04:11.985 { 00:04:11.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.985 "dma_device_type": 2 00:04:11.985 } 00:04:11.985 ], 00:04:11.985 "driver_specific": {} 00:04:11.985 } 00:04:11.985 ]' 00:04:11.985 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:11.985 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:11.985 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:11.985 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.985 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.985 [2024-12-05 09:35:59.450174] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:11.985 [2024-12-05 09:35:59.450249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:11.985 [2024-12-05 09:35:59.450279] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:11.985 [2024-12-05 09:35:59.450293] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:11.985 [2024-12-05 09:35:59.452855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:11.985 [2024-12-05 09:35:59.453059] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:11.985 Passthru0 00:04:11.985 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.985 
09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:11.985 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.985 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.985 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.985 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:11.985 { 00:04:11.985 "name": "Malloc0", 00:04:11.985 "aliases": [ 00:04:11.985 "1987eac2-e4d1-4b70-b83b-3ee4ad2b0cf4" 00:04:11.985 ], 00:04:11.985 "product_name": "Malloc disk", 00:04:11.985 "block_size": 512, 00:04:11.985 "num_blocks": 16384, 00:04:11.985 "uuid": "1987eac2-e4d1-4b70-b83b-3ee4ad2b0cf4", 00:04:11.985 "assigned_rate_limits": { 00:04:11.985 "rw_ios_per_sec": 0, 00:04:11.985 "rw_mbytes_per_sec": 0, 00:04:11.985 "r_mbytes_per_sec": 0, 00:04:11.985 "w_mbytes_per_sec": 0 00:04:11.985 }, 00:04:11.985 "claimed": true, 00:04:11.985 "claim_type": "exclusive_write", 00:04:11.985 "zoned": false, 00:04:11.985 "supported_io_types": { 00:04:11.985 "read": true, 00:04:11.985 "write": true, 00:04:11.985 "unmap": true, 00:04:11.985 "flush": true, 00:04:11.985 "reset": true, 00:04:11.985 "nvme_admin": false, 00:04:11.985 "nvme_io": false, 00:04:11.985 "nvme_io_md": false, 00:04:11.985 "write_zeroes": true, 00:04:11.985 "zcopy": true, 00:04:11.985 "get_zone_info": false, 00:04:11.985 "zone_management": false, 00:04:11.985 "zone_append": false, 00:04:11.985 "compare": false, 00:04:11.985 "compare_and_write": false, 00:04:11.985 "abort": true, 00:04:11.985 "seek_hole": false, 00:04:11.985 "seek_data": false, 00:04:11.985 "copy": true, 00:04:11.985 "nvme_iov_md": false 00:04:11.985 }, 00:04:11.985 "memory_domains": [ 00:04:11.985 { 00:04:11.985 "dma_device_id": "system", 00:04:11.985 "dma_device_type": 1 00:04:11.985 }, 00:04:11.985 { 00:04:11.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.985 "dma_device_type": 2 00:04:11.985 } 00:04:11.985 ], 00:04:11.985 "driver_specific": {} 00:04:11.985 }, 00:04:11.985 { 00:04:11.985 "name": "Passthru0", 00:04:11.985 "aliases": [ 00:04:11.985 "c4e6edc0-e94c-5257-8378-5cb123c97a8c" 00:04:11.985 ], 00:04:11.985 "product_name": "passthru", 00:04:11.985 "block_size": 512, 00:04:11.985 "num_blocks": 16384, 00:04:11.985 "uuid": "c4e6edc0-e94c-5257-8378-5cb123c97a8c", 00:04:11.985 "assigned_rate_limits": { 00:04:11.985 "rw_ios_per_sec": 0, 00:04:11.985 "rw_mbytes_per_sec": 0, 00:04:11.985 "r_mbytes_per_sec": 0, 00:04:11.985 "w_mbytes_per_sec": 0 00:04:11.985 }, 00:04:11.985 "claimed": false, 00:04:11.985 "zoned": false, 00:04:11.985 "supported_io_types": { 00:04:11.985 "read": true, 00:04:11.985 "write": true, 00:04:11.985 "unmap": true, 00:04:11.985 "flush": true, 00:04:11.985 "reset": true, 00:04:11.985 "nvme_admin": false, 00:04:11.985 "nvme_io": false, 00:04:11.985 "nvme_io_md": false, 00:04:11.985 "write_zeroes": true, 00:04:11.985 "zcopy": true, 00:04:11.985 "get_zone_info": false, 00:04:11.985 "zone_management": false, 00:04:11.985 "zone_append": false, 00:04:11.985 "compare": false, 00:04:11.985 "compare_and_write": false, 00:04:11.985 "abort": true, 00:04:11.985 "seek_hole": false, 00:04:11.985 "seek_data": false, 00:04:11.986 "copy": true, 00:04:11.986 "nvme_iov_md": false 00:04:11.986 }, 00:04:11.986 "memory_domains": [ 00:04:11.986 { 00:04:11.986 "dma_device_id": "system", 00:04:11.986 "dma_device_type": 1 00:04:11.986 }, 00:04:11.986 { 00:04:11.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.986 "dma_device_type": 2 
00:04:11.986 } 00:04:11.986 ], 00:04:11.986 "driver_specific": { 00:04:11.986 "passthru": { 00:04:11.986 "name": "Passthru0", 00:04:11.986 "base_bdev_name": "Malloc0" 00:04:11.986 } 00:04:11.986 } 00:04:11.986 } 00:04:11.986 ]' 00:04:11.986 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:11.986 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:11.986 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:11.986 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.986 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.986 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.986 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:11.986 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.986 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.986 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.986 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:11.986 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.986 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.986 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.986 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:11.986 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:11.986 ************************************ 00:04:11.986 END TEST rpc_integrity 00:04:11.986 ************************************ 00:04:11.986 09:35:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:11.986 00:04:11.986 real 0m0.261s 00:04:11.986 user 0m0.131s 00:04:11.986 sys 0m0.036s 00:04:11.986 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.986 09:35:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.247 09:35:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:12.247 09:35:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.247 09:35:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.247 09:35:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.247 ************************************ 00:04:12.247 START TEST rpc_plugins 00:04:12.247 ************************************ 00:04:12.247 09:35:59 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:12.247 09:35:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:12.247 09:35:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.247 09:35:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:12.247 09:35:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.247 09:35:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:12.247 09:35:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:12.247 09:35:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.247 09:35:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:12.247 09:35:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.247 09:35:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:12.247 { 00:04:12.247 "name": "Malloc1", 00:04:12.247 "aliases": 
[ 00:04:12.247 "6a72ad52-64c9-402c-b463-1967a7fbec82" 00:04:12.247 ], 00:04:12.247 "product_name": "Malloc disk", 00:04:12.247 "block_size": 4096, 00:04:12.247 "num_blocks": 256, 00:04:12.247 "uuid": "6a72ad52-64c9-402c-b463-1967a7fbec82", 00:04:12.247 "assigned_rate_limits": { 00:04:12.247 "rw_ios_per_sec": 0, 00:04:12.247 "rw_mbytes_per_sec": 0, 00:04:12.247 "r_mbytes_per_sec": 0, 00:04:12.247 "w_mbytes_per_sec": 0 00:04:12.247 }, 00:04:12.247 "claimed": false, 00:04:12.247 "zoned": false, 00:04:12.247 "supported_io_types": { 00:04:12.247 "read": true, 00:04:12.247 "write": true, 00:04:12.247 "unmap": true, 00:04:12.247 "flush": true, 00:04:12.247 "reset": true, 00:04:12.247 "nvme_admin": false, 00:04:12.247 "nvme_io": false, 00:04:12.247 "nvme_io_md": false, 00:04:12.247 "write_zeroes": true, 00:04:12.247 "zcopy": true, 00:04:12.247 "get_zone_info": false, 00:04:12.247 "zone_management": false, 00:04:12.247 "zone_append": false, 00:04:12.247 "compare": false, 00:04:12.247 "compare_and_write": false, 00:04:12.247 "abort": true, 00:04:12.247 "seek_hole": false, 00:04:12.247 "seek_data": false, 00:04:12.247 "copy": true, 00:04:12.247 "nvme_iov_md": false 00:04:12.247 }, 00:04:12.247 "memory_domains": [ 00:04:12.247 { 00:04:12.247 "dma_device_id": "system", 00:04:12.247 "dma_device_type": 1 00:04:12.247 }, 00:04:12.247 { 00:04:12.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.247 "dma_device_type": 2 00:04:12.247 } 00:04:12.247 ], 00:04:12.247 "driver_specific": {} 00:04:12.247 } 00:04:12.247 ]' 00:04:12.247 09:35:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:12.247 09:35:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:12.247 09:35:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:12.248 09:35:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.248 09:35:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:12.248 09:35:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.248 09:35:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:12.248 09:35:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.248 09:35:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:12.248 09:35:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.248 09:35:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:12.248 09:35:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:12.248 ************************************ 00:04:12.248 END TEST rpc_plugins 00:04:12.248 ************************************ 00:04:12.248 09:35:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:12.248 00:04:12.248 real 0m0.123s 00:04:12.248 user 0m0.066s 00:04:12.248 sys 0m0.018s 00:04:12.248 09:35:59 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.248 09:35:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:12.248 09:35:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:12.248 09:35:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.248 09:35:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.248 09:35:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.248 ************************************ 00:04:12.248 START TEST rpc_trace_cmd_test 00:04:12.248 ************************************ 00:04:12.248 09:35:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:12.248 09:35:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:12.248 09:35:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:12.248 09:35:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.248 09:35:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:12.248 09:35:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.248 09:35:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:12.248 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57149", 00:04:12.248 "tpoint_group_mask": "0x8", 00:04:12.248 "iscsi_conn": { 00:04:12.248 "mask": "0x2", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 }, 00:04:12.248 "scsi": { 00:04:12.248 "mask": "0x4", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 }, 00:04:12.248 "bdev": { 00:04:12.248 "mask": "0x8", 00:04:12.248 "tpoint_mask": "0xffffffffffffffff" 00:04:12.248 }, 00:04:12.248 "nvmf_rdma": { 00:04:12.248 "mask": "0x10", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 }, 00:04:12.248 "nvmf_tcp": { 00:04:12.248 "mask": "0x20", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 }, 00:04:12.248 "ftl": { 00:04:12.248 "mask": "0x40", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 }, 00:04:12.248 "blobfs": { 00:04:12.248 "mask": "0x80", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 }, 00:04:12.248 "dsa": { 00:04:12.248 "mask": "0x200", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 }, 00:04:12.248 "thread": { 00:04:12.248 "mask": "0x400", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 }, 00:04:12.248 "nvme_pcie": { 00:04:12.248 "mask": "0x800", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 }, 00:04:12.248 "iaa": { 00:04:12.248 "mask": "0x1000", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 }, 00:04:12.248 "nvme_tcp": { 00:04:12.248 "mask": "0x2000", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 }, 00:04:12.248 "bdev_nvme": { 00:04:12.248 "mask": "0x4000", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 }, 00:04:12.248 "sock": { 00:04:12.248 "mask": "0x8000", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 }, 00:04:12.248 "blob": { 00:04:12.248 "mask": "0x10000", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 }, 00:04:12.248 "bdev_raid": { 00:04:12.248 "mask": "0x20000", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 }, 00:04:12.248 "scheduler": { 00:04:12.248 "mask": "0x40000", 00:04:12.248 "tpoint_mask": "0x0" 00:04:12.248 } 00:04:12.248 }' 00:04:12.248 09:35:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:12.513 09:35:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:12.513 09:35:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:12.513 09:35:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:12.513 09:35:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:12.513 09:35:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:12.513 09:35:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:12.514 09:35:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:12.514 09:35:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:12.514 ************************************ 00:04:12.514 END TEST rpc_trace_cmd_test 00:04:12.514 ************************************ 00:04:12.514 09:35:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:12.514 00:04:12.514 real 0m0.161s 
00:04:12.514 user 0m0.130s 00:04:12.514 sys 0m0.020s 00:04:12.514 09:35:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.514 09:35:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:12.514 09:36:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:12.514 09:36:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:12.514 09:36:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:12.514 09:36:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.514 09:36:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.514 09:36:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.514 ************************************ 00:04:12.514 START TEST rpc_daemon_integrity 00:04:12.514 ************************************ 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.514 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.775 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.775 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:12.775 { 00:04:12.775 "name": "Malloc2", 00:04:12.775 "aliases": [ 00:04:12.775 "1e4010d0-178f-471b-86d0-f1bde104b221" 00:04:12.775 ], 00:04:12.775 "product_name": "Malloc disk", 00:04:12.775 "block_size": 512, 00:04:12.775 "num_blocks": 16384, 00:04:12.775 "uuid": "1e4010d0-178f-471b-86d0-f1bde104b221", 00:04:12.775 "assigned_rate_limits": { 00:04:12.775 "rw_ios_per_sec": 0, 00:04:12.775 "rw_mbytes_per_sec": 0, 00:04:12.775 "r_mbytes_per_sec": 0, 00:04:12.775 "w_mbytes_per_sec": 0 00:04:12.775 }, 00:04:12.775 "claimed": false, 00:04:12.775 "zoned": false, 00:04:12.775 "supported_io_types": { 00:04:12.775 "read": true, 00:04:12.775 "write": true, 00:04:12.775 "unmap": true, 00:04:12.775 "flush": true, 00:04:12.775 "reset": true, 00:04:12.775 "nvme_admin": false, 00:04:12.775 "nvme_io": false, 00:04:12.775 "nvme_io_md": false, 00:04:12.775 "write_zeroes": true, 00:04:12.775 "zcopy": true, 00:04:12.775 "get_zone_info": false, 00:04:12.775 "zone_management": false, 00:04:12.775 "zone_append": false, 00:04:12.776 "compare": false, 00:04:12.776 
"compare_and_write": false, 00:04:12.776 "abort": true, 00:04:12.776 "seek_hole": false, 00:04:12.776 "seek_data": false, 00:04:12.776 "copy": true, 00:04:12.776 "nvme_iov_md": false 00:04:12.776 }, 00:04:12.776 "memory_domains": [ 00:04:12.776 { 00:04:12.776 "dma_device_id": "system", 00:04:12.776 "dma_device_type": 1 00:04:12.776 }, 00:04:12.776 { 00:04:12.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.776 "dma_device_type": 2 00:04:12.776 } 00:04:12.776 ], 00:04:12.776 "driver_specific": {} 00:04:12.776 } 00:04:12.776 ]' 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.776 [2024-12-05 09:36:00.185902] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:12.776 [2024-12-05 09:36:00.185978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:12.776 [2024-12-05 09:36:00.186002] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:12.776 [2024-12-05 09:36:00.186013] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:12.776 [2024-12-05 09:36:00.188538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:12.776 [2024-12-05 09:36:00.188587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:12.776 Passthru0 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:12.776 { 00:04:12.776 "name": "Malloc2", 00:04:12.776 "aliases": [ 00:04:12.776 "1e4010d0-178f-471b-86d0-f1bde104b221" 00:04:12.776 ], 00:04:12.776 "product_name": "Malloc disk", 00:04:12.776 "block_size": 512, 00:04:12.776 "num_blocks": 16384, 00:04:12.776 "uuid": "1e4010d0-178f-471b-86d0-f1bde104b221", 00:04:12.776 "assigned_rate_limits": { 00:04:12.776 "rw_ios_per_sec": 0, 00:04:12.776 "rw_mbytes_per_sec": 0, 00:04:12.776 "r_mbytes_per_sec": 0, 00:04:12.776 "w_mbytes_per_sec": 0 00:04:12.776 }, 00:04:12.776 "claimed": true, 00:04:12.776 "claim_type": "exclusive_write", 00:04:12.776 "zoned": false, 00:04:12.776 "supported_io_types": { 00:04:12.776 "read": true, 00:04:12.776 "write": true, 00:04:12.776 "unmap": true, 00:04:12.776 "flush": true, 00:04:12.776 "reset": true, 00:04:12.776 "nvme_admin": false, 00:04:12.776 "nvme_io": false, 00:04:12.776 "nvme_io_md": false, 00:04:12.776 "write_zeroes": true, 00:04:12.776 "zcopy": true, 00:04:12.776 "get_zone_info": false, 00:04:12.776 "zone_management": false, 00:04:12.776 "zone_append": false, 00:04:12.776 "compare": false, 00:04:12.776 "compare_and_write": false, 00:04:12.776 "abort": true, 00:04:12.776 "seek_hole": false, 00:04:12.776 "seek_data": false, 
00:04:12.776 "copy": true, 00:04:12.776 "nvme_iov_md": false 00:04:12.776 }, 00:04:12.776 "memory_domains": [ 00:04:12.776 { 00:04:12.776 "dma_device_id": "system", 00:04:12.776 "dma_device_type": 1 00:04:12.776 }, 00:04:12.776 { 00:04:12.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.776 "dma_device_type": 2 00:04:12.776 } 00:04:12.776 ], 00:04:12.776 "driver_specific": {} 00:04:12.776 }, 00:04:12.776 { 00:04:12.776 "name": "Passthru0", 00:04:12.776 "aliases": [ 00:04:12.776 "488be3c1-90c9-56f9-85c7-b3a1d12d1a3b" 00:04:12.776 ], 00:04:12.776 "product_name": "passthru", 00:04:12.776 "block_size": 512, 00:04:12.776 "num_blocks": 16384, 00:04:12.776 "uuid": "488be3c1-90c9-56f9-85c7-b3a1d12d1a3b", 00:04:12.776 "assigned_rate_limits": { 00:04:12.776 "rw_ios_per_sec": 0, 00:04:12.776 "rw_mbytes_per_sec": 0, 00:04:12.776 "r_mbytes_per_sec": 0, 00:04:12.776 "w_mbytes_per_sec": 0 00:04:12.776 }, 00:04:12.776 "claimed": false, 00:04:12.776 "zoned": false, 00:04:12.776 "supported_io_types": { 00:04:12.776 "read": true, 00:04:12.776 "write": true, 00:04:12.776 "unmap": true, 00:04:12.776 "flush": true, 00:04:12.776 "reset": true, 00:04:12.776 "nvme_admin": false, 00:04:12.776 "nvme_io": false, 00:04:12.776 "nvme_io_md": false, 00:04:12.776 "write_zeroes": true, 00:04:12.776 "zcopy": true, 00:04:12.776 "get_zone_info": false, 00:04:12.776 "zone_management": false, 00:04:12.776 "zone_append": false, 00:04:12.776 "compare": false, 00:04:12.776 "compare_and_write": false, 00:04:12.776 "abort": true, 00:04:12.776 "seek_hole": false, 00:04:12.776 "seek_data": false, 00:04:12.776 "copy": true, 00:04:12.776 "nvme_iov_md": false 00:04:12.776 }, 00:04:12.776 "memory_domains": [ 00:04:12.776 { 00:04:12.776 "dma_device_id": "system", 00:04:12.776 "dma_device_type": 1 00:04:12.776 }, 00:04:12.776 { 00:04:12.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.776 "dma_device_type": 2 00:04:12.776 } 00:04:12.776 ], 00:04:12.776 "driver_specific": { 00:04:12.776 "passthru": { 00:04:12.776 "name": "Passthru0", 00:04:12.776 "base_bdev_name": "Malloc2" 00:04:12.776 } 00:04:12.776 } 00:04:12.776 } 00:04:12.776 ]' 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
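rpc_daemon_integrity has now walked the full bdev lifecycle: create a malloc bdev, layer a passthru bdev on top of it, confirm both appear in bdev_get_bdevs, then tear both down until the list is empty again. The same cycle expressed as direct rpc.py calls (default /var/tmp/spdk.sock socket assumed; names match the log):

    rpc.py bdev_malloc_create -b Malloc2 8 512    # 8 MB, 512 B blocks
    rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    rpc.py bdev_get_bdevs | jq length             # 2 bdevs registered
    rpc.py bdev_passthru_delete Passthru0
    rpc.py bdev_malloc_delete Malloc2
    rpc.py bdev_get_bdevs | jq length             # back to 0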
00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:12.776 ************************************ 00:04:12.776 END TEST rpc_daemon_integrity 00:04:12.776 ************************************ 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:12.776 00:04:12.776 real 0m0.257s 00:04:12.776 user 0m0.131s 00:04:12.776 sys 0m0.037s 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.776 09:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.776 09:36:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:12.776 09:36:00 rpc -- rpc/rpc.sh@84 -- # killprocess 57149 00:04:12.776 09:36:00 rpc -- common/autotest_common.sh@954 -- # '[' -z 57149 ']' 00:04:12.776 09:36:00 rpc -- common/autotest_common.sh@958 -- # kill -0 57149 00:04:12.776 09:36:00 rpc -- common/autotest_common.sh@959 -- # uname 00:04:12.776 09:36:00 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.776 09:36:00 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57149 00:04:13.038 09:36:00 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.038 killing process with pid 57149 00:04:13.038 09:36:00 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.038 09:36:00 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57149' 00:04:13.038 09:36:00 rpc -- common/autotest_common.sh@973 -- # kill 57149 00:04:13.038 09:36:00 rpc -- common/autotest_common.sh@978 -- # wait 57149 00:04:14.472 00:04:14.472 real 0m3.655s 00:04:14.472 user 0m4.000s 00:04:14.472 sys 0m0.704s 00:04:14.472 09:36:01 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.472 ************************************ 00:04:14.472 END TEST rpc 00:04:14.472 ************************************ 00:04:14.472 09:36:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.472 09:36:01 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:14.472 09:36:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.472 09:36:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.472 09:36:01 -- common/autotest_common.sh@10 -- # set +x 00:04:14.472 ************************************ 00:04:14.472 START TEST skip_rpc 00:04:14.472 ************************************ 00:04:14.472 09:36:01 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:14.472 * Looking for test storage... 
00:04:14.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:14.472 09:36:01 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:14.472 09:36:01 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:14.472 09:36:01 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:14.472 09:36:01 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.472 09:36:01 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:14.472 09:36:01 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.472 09:36:01 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:14.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.472 --rc genhtml_branch_coverage=1 00:04:14.472 --rc genhtml_function_coverage=1 00:04:14.472 --rc genhtml_legend=1 00:04:14.472 --rc geninfo_all_blocks=1 00:04:14.472 --rc geninfo_unexecuted_blocks=1 00:04:14.472 00:04:14.472 ' 00:04:14.472 09:36:01 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:14.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.472 --rc genhtml_branch_coverage=1 00:04:14.472 --rc genhtml_function_coverage=1 00:04:14.472 --rc genhtml_legend=1 00:04:14.472 --rc geninfo_all_blocks=1 00:04:14.472 --rc geninfo_unexecuted_blocks=1 00:04:14.472 00:04:14.472 ' 00:04:14.472 09:36:01 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:14.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.472 --rc genhtml_branch_coverage=1 00:04:14.472 --rc genhtml_function_coverage=1 00:04:14.472 --rc genhtml_legend=1 00:04:14.472 --rc geninfo_all_blocks=1 00:04:14.472 --rc geninfo_unexecuted_blocks=1 00:04:14.472 00:04:14.472 ' 00:04:14.472 09:36:01 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:14.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.472 --rc genhtml_branch_coverage=1 00:04:14.472 --rc genhtml_function_coverage=1 00:04:14.472 --rc genhtml_legend=1 00:04:14.472 --rc geninfo_all_blocks=1 00:04:14.472 --rc geninfo_unexecuted_blocks=1 00:04:14.472 00:04:14.472 ' 00:04:14.472 09:36:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:14.472 09:36:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:14.472 09:36:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:14.472 09:36:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.472 09:36:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.472 09:36:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.472 ************************************ 00:04:14.472 START TEST skip_rpc 00:04:14.472 ************************************ 00:04:14.472 09:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:14.472 09:36:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57367 00:04:14.472 09:36:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.472 09:36:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:14.472 09:36:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:14.472 [2024-12-05 09:36:02.013750] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
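Here spdk_tgt has been launched with --no-rpc-server, so the rpc_cmd call that follows is expected to fail: the whole point of test_skip_rpc is that no RPC socket exists to answer spdk_get_version. The assertion uses autotest's NOT helper, which inverts the wrapped command's exit status; reduced to plain bash, the pattern is roughly:

    # sketch of the common/autotest_common.sh helper seen throughout this
    # log: NOT succeeds exactly when the wrapped command fails
    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT rpc_cmd spdk_get_version   # passes here: no RPC server is listening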
00:04:14.472 [2024-12-05 09:36:02.014260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57367 ] 00:04:14.731 [2024-12-05 09:36:02.170433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.731 [2024-12-05 09:36:02.248800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57367 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57367 ']' 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57367 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57367 00:04:20.102 killing process with pid 57367 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57367' 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57367 00:04:20.102 09:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57367 00:04:20.675 00:04:20.675 real 0m6.203s 00:04:20.675 user 0m5.846s 00:04:20.675 sys 0m0.257s 00:04:20.675 ************************************ 00:04:20.675 END TEST skip_rpc 00:04:20.675 ************************************ 00:04:20.675 09:36:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.675 09:36:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:20.675 09:36:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:20.675 09:36:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.675 09:36:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.675 09:36:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.675 ************************************ 00:04:20.675 START TEST skip_rpc_with_json 00:04:20.675 ************************************ 00:04:20.675 09:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:20.675 09:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:20.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.675 09:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57460 00:04:20.675 09:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.675 09:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57460 00:04:20.675 09:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57460 ']' 00:04:20.675 09:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.675 09:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.675 09:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.675 09:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.675 09:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.675 09:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.675 [2024-12-05 09:36:08.247925] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
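The skip_rpc_with_json sub-test starting here verifies that a live configuration survives a save/restore round trip: it creates a TCP transport over RPC, snapshots the runtime state with save_config, then relaunches the target from that JSON and greps the new log for the transport-init notice. A condensed sketch, using the CONFIG_PATH and LOG_PATH values defined earlier in this log (output redirection assumed; the script's exact plumbing differs):

    rpc_cmd nvmf_create_transport -t tcp
    rpc_cmd save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
    # second boot: no RPC server, state comes from the saved JSON instead
    spdk_tgt --no-rpc-server -m 0x1 \
        --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json \
        > /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
    grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt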
00:04:20.675 [2024-12-05 09:36:08.248147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57460 ] 00:04:20.936 [2024-12-05 09:36:08.396571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.936 [2024-12-05 09:36:08.475371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.504 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.504 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:21.504 09:36:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:21.504 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.504 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.504 [2024-12-05 09:36:09.086943] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:21.504 request: 00:04:21.504 { 00:04:21.504 "trtype": "tcp", 00:04:21.504 "method": "nvmf_get_transports", 00:04:21.504 "req_id": 1 00:04:21.504 } 00:04:21.504 Got JSON-RPC error response 00:04:21.504 response: 00:04:21.504 { 00:04:21.504 "code": -19, 00:04:21.504 "message": "No such device" 00:04:21.504 } 00:04:21.504 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:21.504 09:36:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:21.504 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.504 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.504 [2024-12-05 09:36:09.099035] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:21.504 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.504 09:36:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:21.504 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.504 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.764 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.764 09:36:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:21.764 { 00:04:21.764 "subsystems": [ 00:04:21.764 { 00:04:21.764 "subsystem": "fsdev", 00:04:21.764 "config": [ 00:04:21.764 { 00:04:21.764 "method": "fsdev_set_opts", 00:04:21.764 "params": { 00:04:21.764 "fsdev_io_pool_size": 65535, 00:04:21.764 "fsdev_io_cache_size": 256 00:04:21.764 } 00:04:21.764 } 00:04:21.764 ] 00:04:21.764 }, 00:04:21.764 { 00:04:21.764 "subsystem": "keyring", 00:04:21.764 "config": [] 00:04:21.764 }, 00:04:21.764 { 00:04:21.764 "subsystem": "iobuf", 00:04:21.764 "config": [ 00:04:21.764 { 00:04:21.764 "method": "iobuf_set_options", 00:04:21.764 "params": { 00:04:21.764 "small_pool_count": 8192, 00:04:21.764 "large_pool_count": 1024, 00:04:21.764 "small_bufsize": 8192, 00:04:21.764 "large_bufsize": 135168, 00:04:21.764 "enable_numa": false 00:04:21.764 } 00:04:21.764 } 00:04:21.764 ] 00:04:21.764 }, 00:04:21.764 { 00:04:21.764 "subsystem": "sock", 00:04:21.764 "config": [ 00:04:21.764 { 
00:04:21.764 "method": "sock_set_default_impl", 00:04:21.764 "params": { 00:04:21.764 "impl_name": "posix" 00:04:21.764 } 00:04:21.764 }, 00:04:21.764 { 00:04:21.764 "method": "sock_impl_set_options", 00:04:21.764 "params": { 00:04:21.764 "impl_name": "ssl", 00:04:21.764 "recv_buf_size": 4096, 00:04:21.764 "send_buf_size": 4096, 00:04:21.764 "enable_recv_pipe": true, 00:04:21.764 "enable_quickack": false, 00:04:21.764 "enable_placement_id": 0, 00:04:21.764 "enable_zerocopy_send_server": true, 00:04:21.764 "enable_zerocopy_send_client": false, 00:04:21.764 "zerocopy_threshold": 0, 00:04:21.764 "tls_version": 0, 00:04:21.764 "enable_ktls": false 00:04:21.764 } 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "method": "sock_impl_set_options", 00:04:21.765 "params": { 00:04:21.765 "impl_name": "posix", 00:04:21.765 "recv_buf_size": 2097152, 00:04:21.765 "send_buf_size": 2097152, 00:04:21.765 "enable_recv_pipe": true, 00:04:21.765 "enable_quickack": false, 00:04:21.765 "enable_placement_id": 0, 00:04:21.765 "enable_zerocopy_send_server": true, 00:04:21.765 "enable_zerocopy_send_client": false, 00:04:21.765 "zerocopy_threshold": 0, 00:04:21.765 "tls_version": 0, 00:04:21.765 "enable_ktls": false 00:04:21.765 } 00:04:21.765 } 00:04:21.765 ] 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "subsystem": "vmd", 00:04:21.765 "config": [] 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "subsystem": "accel", 00:04:21.765 "config": [ 00:04:21.765 { 00:04:21.765 "method": "accel_set_options", 00:04:21.765 "params": { 00:04:21.765 "small_cache_size": 128, 00:04:21.765 "large_cache_size": 16, 00:04:21.765 "task_count": 2048, 00:04:21.765 "sequence_count": 2048, 00:04:21.765 "buf_count": 2048 00:04:21.765 } 00:04:21.765 } 00:04:21.765 ] 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "subsystem": "bdev", 00:04:21.765 "config": [ 00:04:21.765 { 00:04:21.765 "method": "bdev_set_options", 00:04:21.765 "params": { 00:04:21.765 "bdev_io_pool_size": 65535, 00:04:21.765 "bdev_io_cache_size": 256, 00:04:21.765 "bdev_auto_examine": true, 00:04:21.765 "iobuf_small_cache_size": 128, 00:04:21.765 "iobuf_large_cache_size": 16 00:04:21.765 } 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "method": "bdev_raid_set_options", 00:04:21.765 "params": { 00:04:21.765 "process_window_size_kb": 1024, 00:04:21.765 "process_max_bandwidth_mb_sec": 0 00:04:21.765 } 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "method": "bdev_iscsi_set_options", 00:04:21.765 "params": { 00:04:21.765 "timeout_sec": 30 00:04:21.765 } 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "method": "bdev_nvme_set_options", 00:04:21.765 "params": { 00:04:21.765 "action_on_timeout": "none", 00:04:21.765 "timeout_us": 0, 00:04:21.765 "timeout_admin_us": 0, 00:04:21.765 "keep_alive_timeout_ms": 10000, 00:04:21.765 "arbitration_burst": 0, 00:04:21.765 "low_priority_weight": 0, 00:04:21.765 "medium_priority_weight": 0, 00:04:21.765 "high_priority_weight": 0, 00:04:21.765 "nvme_adminq_poll_period_us": 10000, 00:04:21.765 "nvme_ioq_poll_period_us": 0, 00:04:21.765 "io_queue_requests": 0, 00:04:21.765 "delay_cmd_submit": true, 00:04:21.765 "transport_retry_count": 4, 00:04:21.765 "bdev_retry_count": 3, 00:04:21.765 "transport_ack_timeout": 0, 00:04:21.765 "ctrlr_loss_timeout_sec": 0, 00:04:21.765 "reconnect_delay_sec": 0, 00:04:21.765 "fast_io_fail_timeout_sec": 0, 00:04:21.765 "disable_auto_failback": false, 00:04:21.765 "generate_uuids": false, 00:04:21.765 "transport_tos": 0, 00:04:21.765 "nvme_error_stat": false, 00:04:21.765 "rdma_srq_size": 0, 00:04:21.765 "io_path_stat": false, 
00:04:21.765 "allow_accel_sequence": false, 00:04:21.765 "rdma_max_cq_size": 0, 00:04:21.765 "rdma_cm_event_timeout_ms": 0, 00:04:21.765 "dhchap_digests": [ 00:04:21.765 "sha256", 00:04:21.765 "sha384", 00:04:21.765 "sha512" 00:04:21.765 ], 00:04:21.765 "dhchap_dhgroups": [ 00:04:21.765 "null", 00:04:21.765 "ffdhe2048", 00:04:21.765 "ffdhe3072", 00:04:21.765 "ffdhe4096", 00:04:21.765 "ffdhe6144", 00:04:21.765 "ffdhe8192" 00:04:21.765 ] 00:04:21.765 } 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "method": "bdev_nvme_set_hotplug", 00:04:21.765 "params": { 00:04:21.765 "period_us": 100000, 00:04:21.765 "enable": false 00:04:21.765 } 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "method": "bdev_wait_for_examine" 00:04:21.765 } 00:04:21.765 ] 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "subsystem": "scsi", 00:04:21.765 "config": null 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "subsystem": "scheduler", 00:04:21.765 "config": [ 00:04:21.765 { 00:04:21.765 "method": "framework_set_scheduler", 00:04:21.765 "params": { 00:04:21.765 "name": "static" 00:04:21.765 } 00:04:21.765 } 00:04:21.765 ] 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "subsystem": "vhost_scsi", 00:04:21.765 "config": [] 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "subsystem": "vhost_blk", 00:04:21.765 "config": [] 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "subsystem": "ublk", 00:04:21.765 "config": [] 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "subsystem": "nbd", 00:04:21.765 "config": [] 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "subsystem": "nvmf", 00:04:21.765 "config": [ 00:04:21.765 { 00:04:21.765 "method": "nvmf_set_config", 00:04:21.765 "params": { 00:04:21.765 "discovery_filter": "match_any", 00:04:21.765 "admin_cmd_passthru": { 00:04:21.765 "identify_ctrlr": false 00:04:21.765 }, 00:04:21.765 "dhchap_digests": [ 00:04:21.765 "sha256", 00:04:21.765 "sha384", 00:04:21.765 "sha512" 00:04:21.765 ], 00:04:21.765 "dhchap_dhgroups": [ 00:04:21.765 "null", 00:04:21.765 "ffdhe2048", 00:04:21.765 "ffdhe3072", 00:04:21.765 "ffdhe4096", 00:04:21.765 "ffdhe6144", 00:04:21.765 "ffdhe8192" 00:04:21.765 ] 00:04:21.765 } 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "method": "nvmf_set_max_subsystems", 00:04:21.765 "params": { 00:04:21.765 "max_subsystems": 1024 00:04:21.765 } 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "method": "nvmf_set_crdt", 00:04:21.765 "params": { 00:04:21.765 "crdt1": 0, 00:04:21.765 "crdt2": 0, 00:04:21.765 "crdt3": 0 00:04:21.765 } 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "method": "nvmf_create_transport", 00:04:21.765 "params": { 00:04:21.765 "trtype": "TCP", 00:04:21.765 "max_queue_depth": 128, 00:04:21.765 "max_io_qpairs_per_ctrlr": 127, 00:04:21.765 "in_capsule_data_size": 4096, 00:04:21.765 "max_io_size": 131072, 00:04:21.765 "io_unit_size": 131072, 00:04:21.765 "max_aq_depth": 128, 00:04:21.765 "num_shared_buffers": 511, 00:04:21.765 "buf_cache_size": 4294967295, 00:04:21.765 "dif_insert_or_strip": false, 00:04:21.765 "zcopy": false, 00:04:21.765 "c2h_success": true, 00:04:21.765 "sock_priority": 0, 00:04:21.765 "abort_timeout_sec": 1, 00:04:21.765 "ack_timeout": 0, 00:04:21.765 "data_wr_pool_size": 0 00:04:21.765 } 00:04:21.765 } 00:04:21.765 ] 00:04:21.765 }, 00:04:21.765 { 00:04:21.765 "subsystem": "iscsi", 00:04:21.765 "config": [ 00:04:21.765 { 00:04:21.765 "method": "iscsi_set_options", 00:04:21.765 "params": { 00:04:21.765 "node_base": "iqn.2016-06.io.spdk", 00:04:21.765 "max_sessions": 128, 00:04:21.765 "max_connections_per_session": 2, 00:04:21.765 "max_queue_depth": 64, 00:04:21.765 
"default_time2wait": 2, 00:04:21.765 "default_time2retain": 20, 00:04:21.765 "first_burst_length": 8192, 00:04:21.765 "immediate_data": true, 00:04:21.765 "allow_duplicated_isid": false, 00:04:21.765 "error_recovery_level": 0, 00:04:21.765 "nop_timeout": 60, 00:04:21.765 "nop_in_interval": 30, 00:04:21.765 "disable_chap": false, 00:04:21.765 "require_chap": false, 00:04:21.765 "mutual_chap": false, 00:04:21.765 "chap_group": 0, 00:04:21.765 "max_large_datain_per_connection": 64, 00:04:21.765 "max_r2t_per_connection": 4, 00:04:21.765 "pdu_pool_size": 36864, 00:04:21.765 "immediate_data_pool_size": 16384, 00:04:21.765 "data_out_pool_size": 2048 00:04:21.765 } 00:04:21.765 } 00:04:21.765 ] 00:04:21.765 } 00:04:21.765 ] 00:04:21.765 } 00:04:21.765 09:36:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:21.765 09:36:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57460 00:04:21.765 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57460 ']' 00:04:21.765 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57460 00:04:21.765 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:21.765 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.765 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57460 00:04:21.765 killing process with pid 57460 00:04:21.765 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.765 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.765 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57460' 00:04:21.765 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57460 00:04:21.765 09:36:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57460 00:04:23.149 09:36:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57494 00:04:23.149 09:36:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:23.150 09:36:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.424 09:36:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57494 00:04:28.424 09:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57494 ']' 00:04:28.424 09:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57494 00:04:28.424 09:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:28.424 09:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.424 09:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57494 00:04:28.424 killing process with pid 57494 00:04:28.424 09:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.424 09:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.424 09:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57494' 00:04:28.424 09:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57494 00:04:28.424 09:36:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57494 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:29.359 ************************************ 00:04:29.359 END TEST skip_rpc_with_json 00:04:29.359 ************************************ 00:04:29.359 00:04:29.359 real 0m8.460s 00:04:29.359 user 0m8.120s 00:04:29.359 sys 0m0.552s 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.359 09:36:16 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:29.359 09:36:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.359 09:36:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.359 09:36:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.359 ************************************ 00:04:29.359 START TEST skip_rpc_with_delay 00:04:29.359 ************************************ 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.359 [2024-12-05 09:36:16.777302] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
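That spdk_app_start error is the expected outcome, not a failure of the run: --wait-for-rpc tells the app to pause initialization until an RPC arrives, which is contradictory when --no-rpc-server disables the RPC server entirely, so spdk_tgt must refuse to start. test_skip_rpc_with_delay asserts exactly that with the same inverted-exit-status pattern:

    # must exit non-zero: cannot wait for an RPC that can never be sent
    NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc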
00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.359 00:04:29.359 real 0m0.125s 00:04:29.359 user 0m0.069s 00:04:29.359 sys 0m0.054s 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.359 09:36:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:29.359 ************************************ 00:04:29.359 END TEST skip_rpc_with_delay 00:04:29.359 ************************************ 00:04:29.359 09:36:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:29.359 09:36:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:29.359 09:36:16 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:29.359 09:36:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.359 09:36:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.359 09:36:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.359 ************************************ 00:04:29.359 START TEST exit_on_failed_rpc_init 00:04:29.359 ************************************ 00:04:29.359 09:36:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:29.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.359 09:36:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57617 00:04:29.359 09:36:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57617 00:04:29.359 09:36:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.359 09:36:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57617 ']' 00:04:29.359 09:36:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.359 09:36:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.359 09:36:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.359 09:36:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.359 09:36:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.359 [2024-12-05 09:36:16.949954] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:04:29.359 [2024-12-05 09:36:16.950255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57617 ] 00:04:29.616 [2024-12-05 09:36:17.104822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.616 [2024-12-05 09:36:17.180672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.183 09:36:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.183 09:36:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:30.183 09:36:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.183 09:36:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:30.183 09:36:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:30.183 09:36:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:30.183 09:36:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.183 09:36:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:30.183 09:36:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.183 09:36:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:30.183 09:36:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.183 09:36:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:30.183 09:36:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.183 09:36:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:30.183 09:36:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:30.440 [2024-12-05 09:36:17.858694] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:04:30.440 [2024-12-05 09:36:17.858925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57629 ] 00:04:30.440 [2024-12-05 09:36:18.018813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.699 [2024-12-05 09:36:18.111280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.699 [2024-12-05 09:36:18.111362] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
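This listen failure (and the rpc.c/app.c lines that follow it) is the scenario exit_on_failed_rpc_init exists to cover: the second spdk_tgt instance defaults to the same RPC socket, /var/tmp/spdk.sock, finds it occupied by pid 57617, and must exit non-zero while the first instance keeps running. Outside this negative test, two targets can coexist by giving the second its own socket; a sketch (the -r/--rpc-socket option is SPDK's standard app flag; the second path is illustrative):

    spdk_tgt -m 0x1 &                         # owns /var/tmp/spdk.sock
    spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &  # binds a separate RPC socket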
00:04:30.699 [2024-12-05 09:36:18.111380] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:30.699 [2024-12-05 09:36:18.111397] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57617 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57617 ']' 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57617 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57617 00:04:30.699 killing process with pid 57617 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57617' 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57617 00:04:30.699 09:36:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57617 00:04:32.147 ************************************ 00:04:32.147 END TEST exit_on_failed_rpc_init 00:04:32.147 ************************************ 00:04:32.147 00:04:32.147 real 0m2.610s 00:04:32.147 user 0m2.933s 00:04:32.147 sys 0m0.380s 00:04:32.147 09:36:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.147 09:36:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.147 09:36:19 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:32.147 00:04:32.147 real 0m17.717s 00:04:32.147 user 0m17.077s 00:04:32.147 sys 0m1.430s 00:04:32.147 09:36:19 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.147 09:36:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.147 ************************************ 00:04:32.147 END TEST skip_rpc 00:04:32.147 ************************************ 00:04:32.147 09:36:19 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:32.147 09:36:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.147 09:36:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.147 09:36:19 -- common/autotest_common.sh@10 -- # set +x 00:04:32.147 
************************************ 00:04:32.147 START TEST rpc_client 00:04:32.147 ************************************ 00:04:32.147 09:36:19 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:32.147 * Looking for test storage... 00:04:32.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:32.147 09:36:19 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.148 09:36:19 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:32.148 09:36:19 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.148 09:36:19 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.148 09:36:19 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:32.148 09:36:19 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.148 09:36:19 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.148 --rc genhtml_branch_coverage=1 00:04:32.148 --rc genhtml_function_coverage=1 00:04:32.148 --rc genhtml_legend=1 00:04:32.148 --rc geninfo_all_blocks=1 00:04:32.148 --rc geninfo_unexecuted_blocks=1 00:04:32.148 00:04:32.148 ' 00:04:32.148 09:36:19 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.148 --rc genhtml_branch_coverage=1 00:04:32.148 --rc genhtml_function_coverage=1 00:04:32.148 --rc genhtml_legend=1 00:04:32.148 --rc geninfo_all_blocks=1 00:04:32.148 --rc geninfo_unexecuted_blocks=1 00:04:32.148 00:04:32.148 ' 00:04:32.148 09:36:19 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.148 --rc genhtml_branch_coverage=1 00:04:32.148 --rc genhtml_function_coverage=1 00:04:32.148 --rc genhtml_legend=1 00:04:32.148 --rc geninfo_all_blocks=1 00:04:32.148 --rc geninfo_unexecuted_blocks=1 00:04:32.148 00:04:32.148 ' 00:04:32.148 09:36:19 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.148 --rc genhtml_branch_coverage=1 00:04:32.148 --rc genhtml_function_coverage=1 00:04:32.148 --rc genhtml_legend=1 00:04:32.148 --rc geninfo_all_blocks=1 00:04:32.148 --rc geninfo_unexecuted_blocks=1 00:04:32.148 00:04:32.148 ' 00:04:32.148 09:36:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:32.148 OK 00:04:32.148 09:36:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:32.148 00:04:32.148 real 0m0.179s 00:04:32.148 user 0m0.115s 00:04:32.148 sys 0m0.070s 00:04:32.148 09:36:19 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.148 ************************************ 00:04:32.148 END TEST rpc_client 00:04:32.148 ************************************ 00:04:32.148 09:36:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:32.148 09:36:19 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:32.148 09:36:19 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.148 09:36:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.148 09:36:19 -- common/autotest_common.sh@10 -- # set +x 00:04:32.148 ************************************ 00:04:32.148 START TEST json_config 00:04:32.148 ************************************ 00:04:32.410 09:36:19 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:32.410 09:36:19 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.410 09:36:19 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.410 09:36:19 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:32.410 09:36:19 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.410 09:36:19 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.410 09:36:19 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.410 09:36:19 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.410 09:36:19 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.410 09:36:19 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.410 09:36:19 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.410 09:36:19 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.410 09:36:19 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.410 09:36:19 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.410 09:36:19 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.410 09:36:19 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.410 09:36:19 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:32.410 09:36:19 json_config -- scripts/common.sh@345 -- # : 1 00:04:32.410 09:36:19 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.410 09:36:19 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.410 09:36:19 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:32.410 09:36:19 json_config -- scripts/common.sh@353 -- # local d=1 00:04:32.410 09:36:19 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.410 09:36:19 json_config -- scripts/common.sh@355 -- # echo 1 00:04:32.410 09:36:19 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.410 09:36:19 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:32.410 09:36:19 json_config -- scripts/common.sh@353 -- # local d=2 00:04:32.410 09:36:19 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.410 09:36:19 json_config -- scripts/common.sh@355 -- # echo 2 00:04:32.410 09:36:19 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.410 09:36:19 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.410 09:36:19 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.410 09:36:19 json_config -- scripts/common.sh@368 -- # return 0 00:04:32.410 09:36:19 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.410 09:36:19 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.410 --rc genhtml_branch_coverage=1 00:04:32.410 --rc genhtml_function_coverage=1 00:04:32.410 --rc genhtml_legend=1 00:04:32.410 --rc geninfo_all_blocks=1 00:04:32.410 --rc geninfo_unexecuted_blocks=1 00:04:32.410 00:04:32.410 ' 00:04:32.410 09:36:19 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.410 --rc genhtml_branch_coverage=1 00:04:32.410 --rc genhtml_function_coverage=1 00:04:32.410 --rc genhtml_legend=1 00:04:32.410 --rc geninfo_all_blocks=1 00:04:32.410 --rc geninfo_unexecuted_blocks=1 00:04:32.410 00:04:32.410 ' 00:04:32.410 09:36:19 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.410 --rc genhtml_branch_coverage=1 00:04:32.410 --rc genhtml_function_coverage=1 00:04:32.410 --rc genhtml_legend=1 00:04:32.410 --rc geninfo_all_blocks=1 00:04:32.410 --rc geninfo_unexecuted_blocks=1 00:04:32.410 00:04:32.410 ' 00:04:32.410 09:36:19 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.410 --rc genhtml_branch_coverage=1 00:04:32.410 --rc genhtml_function_coverage=1 00:04:32.410 --rc genhtml_legend=1 00:04:32.410 --rc geninfo_all_blocks=1 00:04:32.410 --rc geninfo_unexecuted_blocks=1 00:04:32.410 00:04:32.410 ' 00:04:32.410 09:36:19 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.410 09:36:19 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:415bc8b4-eaf2-4ed5-80fd-e40086c58160 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=415bc8b4-eaf2-4ed5-80fd-e40086c58160 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.410 09:36:19 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.410 09:36:19 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:32.410 09:36:19 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.410 09:36:19 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.410 09:36:19 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.410 09:36:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.411 09:36:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.411 09:36:19 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.411 09:36:19 json_config -- paths/export.sh@5 -- # export PATH 00:04:32.411 09:36:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.411 09:36:19 json_config -- nvmf/common.sh@51 -- # : 0 00:04:32.411 09:36:19 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:32.411 09:36:19 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:32.411 09:36:19 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.411 09:36:19 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.411 09:36:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.411 09:36:19 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:32.411 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:32.411 09:36:19 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:32.411 09:36:19 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:32.411 09:36:19 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:32.411 09:36:19 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:32.411 09:36:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:32.411 09:36:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:32.411 09:36:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:32.411 09:36:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:32.411 09:36:19 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:32.411 WARNING: No tests are enabled so not running JSON configuration tests 00:04:32.411 09:36:19 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:32.411 00:04:32.411 real 0m0.136s 00:04:32.411 user 0m0.088s 00:04:32.411 sys 0m0.047s 00:04:32.411 09:36:19 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.411 09:36:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.411 ************************************ 00:04:32.411 END TEST json_config 00:04:32.411 ************************************ 00:04:32.411 09:36:19 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:32.411 09:36:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.411 09:36:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.411 09:36:19 -- common/autotest_common.sh@10 -- # set +x 00:04:32.411 ************************************ 00:04:32.411 START TEST json_config_extra_key 00:04:32.411 ************************************ 00:04:32.411 09:36:19 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:32.411 09:36:19 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.411 09:36:19 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:32.411 09:36:19 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.672 09:36:20 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.672 09:36:20 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.672 09:36:20 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:32.673 09:36:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:32.673 09:36:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.673 09:36:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:32.673 09:36:20 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.673 09:36:20 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.673 09:36:20 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.673 09:36:20 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:32.673 09:36:20 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.673 09:36:20 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.673 --rc genhtml_branch_coverage=1 00:04:32.673 --rc genhtml_function_coverage=1 00:04:32.673 --rc genhtml_legend=1 00:04:32.673 --rc geninfo_all_blocks=1 00:04:32.673 --rc geninfo_unexecuted_blocks=1 00:04:32.673 00:04:32.673 ' 00:04:32.673 09:36:20 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.673 --rc genhtml_branch_coverage=1 00:04:32.673 --rc genhtml_function_coverage=1 00:04:32.673 --rc genhtml_legend=1 00:04:32.673 --rc geninfo_all_blocks=1 00:04:32.673 --rc geninfo_unexecuted_blocks=1 00:04:32.673 00:04:32.673 ' 00:04:32.673 09:36:20 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.673 --rc genhtml_branch_coverage=1 00:04:32.673 --rc genhtml_function_coverage=1 00:04:32.673 --rc genhtml_legend=1 00:04:32.673 --rc geninfo_all_blocks=1 00:04:32.673 --rc geninfo_unexecuted_blocks=1 00:04:32.673 00:04:32.673 ' 00:04:32.673 09:36:20 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.673 --rc genhtml_branch_coverage=1 00:04:32.673 --rc 
genhtml_function_coverage=1 00:04:32.673 --rc genhtml_legend=1 00:04:32.673 --rc geninfo_all_blocks=1 00:04:32.673 --rc geninfo_unexecuted_blocks=1 00:04:32.673 00:04:32.673 ' 00:04:32.673 09:36:20 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:415bc8b4-eaf2-4ed5-80fd-e40086c58160 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=415bc8b4-eaf2-4ed5-80fd-e40086c58160 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.673 09:36:20 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:32.673 09:36:20 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.673 09:36:20 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.673 09:36:20 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.673 09:36:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.673 09:36:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.673 09:36:20 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.673 09:36:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:32.673 09:36:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:32.673 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:32.673 09:36:20 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:32.673 09:36:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:32.673 09:36:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:32.673 09:36:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:32.673 09:36:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:32.673 09:36:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:32.673 09:36:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:32.673 09:36:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:32.673 09:36:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:32.673 09:36:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:32.673 09:36:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:32.674 09:36:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:32.674 INFO: launching applications... 
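The "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message, logged here and in the json_config run above, is bash's test builtin rejecting an empty string in a numeric comparison ('[' '' -eq 1 ']'). A minimal sketch of the failure and one possible guard; "flag" is an illustrative name, not the script's actual variable:

flag=''
[ "$flag" -eq 1 ]                               # reproduces: [: : integer expression expected
[ "${flag:-0}" -eq 1 ] || echo 'flag not set'   # default the empty value to 0 before comparing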
00:04:32.674 09:36:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:32.674 09:36:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:32.674 09:36:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:32.674 09:36:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.674 09:36:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.674 09:36:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.674 09:36:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.674 09:36:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.674 09:36:20 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:32.674 09:36:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57823 00:04:32.674 09:36:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:32.674 Waiting for target to run... 00:04:32.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:32.674 09:36:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57823 /var/tmp/spdk_tgt.sock 00:04:32.674 09:36:20 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57823 ']' 00:04:32.674 09:36:20 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.674 09:36:20 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.674 09:36:20 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:32.674 09:36:20 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.674 09:36:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:32.674 [2024-12-05 09:36:20.183923] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:04:32.674 [2024-12-05 09:36:20.184181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57823 ] 00:04:32.935 [2024-12-05 09:36:20.498638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.196 [2024-12-05 09:36:20.598928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.767 09:36:21 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.767 00:04:33.767 INFO: shutting down applications... 00:04:33.767 09:36:21 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:33.767 09:36:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:33.767 09:36:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
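The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock..." line above comes from waitforlisten, which blocks until the freshly launched spdk_tgt answers on its RPC socket. A minimal sketch of that polling pattern, assuming the helper only needs the pid and socket path (function name and retry budget are illustrative, though the trace shows max_retries=100):

waitforlisten_sketch() {
    local pid=$1 sock=$2 i
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before it started listening
        [[ -S $sock ]] && return 0               # RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
waitforlisten_sketch 57823 /var/tmp/spdk_tgt.sock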
00:04:33.767 09:36:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:33.767 09:36:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:33.767 09:36:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.767 09:36:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57823 ]] 00:04:33.767 09:36:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57823 00:04:33.767 09:36:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.767 09:36:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.767 09:36:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57823 00:04:33.767 09:36:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.028 09:36:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.028 09:36:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.028 09:36:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57823 00:04:34.028 09:36:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.598 09:36:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.598 09:36:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.598 09:36:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57823 00:04:34.598 09:36:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:35.164 09:36:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:35.164 09:36:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.164 09:36:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57823 00:04:35.164 09:36:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:35.731 09:36:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:35.731 09:36:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.731 SPDK target shutdown done 00:04:35.731 Success 00:04:35.731 09:36:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57823 00:04:35.731 09:36:23 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:35.731 09:36:23 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:35.731 09:36:23 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:35.731 09:36:23 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:35.731 09:36:23 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:35.731 ************************************ 00:04:35.731 END TEST json_config_extra_key 00:04:35.731 ************************************ 00:04:35.731 00:04:35.731 real 0m3.182s 00:04:35.731 user 0m2.716s 00:04:35.731 sys 0m0.371s 00:04:35.731 09:36:23 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.731 09:36:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:35.731 09:36:23 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:35.731 09:36:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.731 09:36:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.731 09:36:23 -- common/autotest_common.sh@10 -- # set +x 00:04:35.731 
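The shutdown sequence traced above is a SIGINT-then-poll loop: json_config/common.sh signals the target, then probes the pid with kill -0 up to 30 times with a 0.5 s sleep between probes, announcing "SPDK target shutdown done" once the process is gone. Condensed from the xtrace:

pid=57823                  # the target pid from this run
kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done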
************************************ 00:04:35.731 START TEST alias_rpc 00:04:35.731 ************************************ 00:04:35.731 09:36:23 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:35.731 * Looking for test storage... 00:04:35.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:35.731 09:36:23 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:35.731 09:36:23 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:35.731 09:36:23 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:35.731 09:36:23 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:35.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
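Every test in this log is wrapped by run_test, which sanity-checks its argument count ('[' 2 -le 1 ']'), prints the START/END banners, and emits the real/user/sys timings. A minimal sketch of that convention, assuming the wrapper does nothing beyond what the banners and timings show:

run_test_sketch() {
    local name=$1; shift
    (( $# >= 1 )) || return 1          # needs a command to run
    echo "START TEST $name"
    time "$@"                          # produces the real/user/sys lines
    echo "END TEST $name"
}
run_test_sketch alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh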
00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.731 09:36:23 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:35.731 09:36:23 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.731 09:36:23 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:35.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.731 --rc genhtml_branch_coverage=1 00:04:35.731 --rc genhtml_function_coverage=1 00:04:35.731 --rc genhtml_legend=1 00:04:35.731 --rc geninfo_all_blocks=1 00:04:35.731 --rc geninfo_unexecuted_blocks=1 00:04:35.731 00:04:35.731 ' 00:04:35.731 09:36:23 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:35.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.731 --rc genhtml_branch_coverage=1 00:04:35.731 --rc genhtml_function_coverage=1 00:04:35.732 --rc genhtml_legend=1 00:04:35.732 --rc geninfo_all_blocks=1 00:04:35.732 --rc geninfo_unexecuted_blocks=1 00:04:35.732 00:04:35.732 ' 00:04:35.732 09:36:23 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:35.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.732 --rc genhtml_branch_coverage=1 00:04:35.732 --rc genhtml_function_coverage=1 00:04:35.732 --rc genhtml_legend=1 00:04:35.732 --rc geninfo_all_blocks=1 00:04:35.732 --rc geninfo_unexecuted_blocks=1 00:04:35.732 00:04:35.732 ' 00:04:35.732 09:36:23 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:35.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.732 --rc genhtml_branch_coverage=1 00:04:35.732 --rc genhtml_function_coverage=1 00:04:35.732 --rc genhtml_legend=1 00:04:35.732 --rc geninfo_all_blocks=1 00:04:35.732 --rc geninfo_unexecuted_blocks=1 00:04:35.732 00:04:35.732 ' 00:04:35.732 09:36:23 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:35.732 09:36:23 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57916 00:04:35.732 09:36:23 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57916 00:04:35.732 09:36:23 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57916 ']' 00:04:35.732 09:36:23 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.732 09:36:23 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.732 09:36:23 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.732 09:36:23 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.732 09:36:23 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.732 09:36:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.990 [2024-12-05 09:36:23.363691] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
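The lcov version probe above walks scripts/common.sh's cmp_versions: split "1.15" and "2" on the characters ".-:", then compare numerically field by field, treating missing fields as 0. A condensed sketch hard-coding the '<' case (the real helper dispatches on an op argument):

lt() {
    local -a ver1 ver2; local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1    # equal is not less-than
}
lt 1.15 2 && echo ok    # succeeds, matching the "return 0" traced above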
00:04:35.990 [2024-12-05 09:36:23.364306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57916 ] 00:04:35.990 [2024-12-05 09:36:23.526609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.247 [2024-12-05 09:36:23.623319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.814 09:36:24 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.815 09:36:24 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:36.815 09:36:24 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:36.815 09:36:24 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57916 00:04:36.815 09:36:24 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57916 ']' 00:04:36.815 09:36:24 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57916 00:04:36.815 09:36:24 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:36.815 09:36:24 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.815 09:36:24 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57916 00:04:37.074 09:36:24 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.074 killing process with pid 57916 00:04:37.074 09:36:24 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.074 09:36:24 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57916' 00:04:37.074 09:36:24 alias_rpc -- common/autotest_common.sh@973 -- # kill 57916 00:04:37.074 09:36:24 alias_rpc -- common/autotest_common.sh@978 -- # wait 57916 00:04:38.447 ************************************ 00:04:38.447 END TEST alias_rpc 00:04:38.447 ************************************ 00:04:38.447 00:04:38.447 real 0m2.662s 00:04:38.447 user 0m2.765s 00:04:38.447 sys 0m0.402s 00:04:38.447 09:36:25 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.447 09:36:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.447 09:36:25 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:38.447 09:36:25 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:38.447 09:36:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.447 09:36:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.447 09:36:25 -- common/autotest_common.sh@10 -- # set +x 00:04:38.447 ************************************ 00:04:38.447 START TEST spdkcli_tcp 00:04:38.447 ************************************ 00:04:38.447 09:36:25 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:38.447 * Looking for test storage... 
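killprocess, used to tear down spdk_tgt pid 57916 above, double-checks what it is about to kill: on Linux it reads the process's comm field (here "reactor_0"), refuses to kill a sudo wrapper, then kills and waits. A minimal sketch under those assumptions:

killprocess_sketch() {
    local pid=$1 name
    [[ $(uname) == Linux ]] || { kill "$pid"; return; }
    name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    [[ $name == sudo ]] && return 1           # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap if it is our child
}
killprocess_sketch 57916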
00:04:38.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:38.447 09:36:25 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:38.447 09:36:25 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:38.447 09:36:25 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:38.447 09:36:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.447 09:36:26 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:38.447 09:36:26 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.447 09:36:26 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:38.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.447 --rc genhtml_branch_coverage=1 00:04:38.447 --rc genhtml_function_coverage=1 00:04:38.447 --rc genhtml_legend=1 00:04:38.447 --rc geninfo_all_blocks=1 00:04:38.447 --rc geninfo_unexecuted_blocks=1 00:04:38.447 00:04:38.447 ' 00:04:38.447 09:36:26 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:38.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.447 --rc genhtml_branch_coverage=1 00:04:38.447 --rc genhtml_function_coverage=1 00:04:38.447 --rc genhtml_legend=1 00:04:38.447 --rc geninfo_all_blocks=1 00:04:38.447 --rc geninfo_unexecuted_blocks=1 00:04:38.447 
00:04:38.447 ' 00:04:38.447 09:36:26 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:38.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.447 --rc genhtml_branch_coverage=1 00:04:38.447 --rc genhtml_function_coverage=1 00:04:38.447 --rc genhtml_legend=1 00:04:38.447 --rc geninfo_all_blocks=1 00:04:38.447 --rc geninfo_unexecuted_blocks=1 00:04:38.447 00:04:38.447 ' 00:04:38.447 09:36:26 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:38.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.447 --rc genhtml_branch_coverage=1 00:04:38.447 --rc genhtml_function_coverage=1 00:04:38.447 --rc genhtml_legend=1 00:04:38.447 --rc geninfo_all_blocks=1 00:04:38.447 --rc geninfo_unexecuted_blocks=1 00:04:38.447 00:04:38.447 ' 00:04:38.447 09:36:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:38.447 09:36:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:38.447 09:36:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:38.447 09:36:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:38.447 09:36:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:38.447 09:36:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:38.447 09:36:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:38.447 09:36:26 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.447 09:36:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.447 09:36:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58012 00:04:38.447 09:36:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:38.447 09:36:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58012 00:04:38.447 09:36:26 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58012 ']' 00:04:38.447 09:36:26 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.447 09:36:26 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.447 09:36:26 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.447 09:36:26 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.447 09:36:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.705 [2024-12-05 09:36:26.108604] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
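spdkcli_tcp exercises the JSON-RPC server over TCP instead of the default UNIX socket: the trace just below starts socat as a bridge from TCP port 9998 to /var/tmp/spdk.sock (socat_pid=58023), then drives rpc.py at 127.0.0.1:9998. The moving parts, condensed from tcp.sh@30-33 (the final kill stands in for the test's err_cleanup/teardown):

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid"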
00:04:38.705 [2024-12-05 09:36:26.108722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58012 ] 00:04:38.705 [2024-12-05 09:36:26.265847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.963 [2024-12-05 09:36:26.344841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.963 [2024-12-05 09:36:26.344958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.529 09:36:26 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.529 09:36:26 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:39.529 09:36:26 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:39.529 09:36:26 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58023 00:04:39.529 09:36:26 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:39.529 [ 00:04:39.529 "bdev_malloc_delete", 00:04:39.529 "bdev_malloc_create", 00:04:39.529 "bdev_null_resize", 00:04:39.529 "bdev_null_delete", 00:04:39.529 "bdev_null_create", 00:04:39.529 "bdev_nvme_cuse_unregister", 00:04:39.529 "bdev_nvme_cuse_register", 00:04:39.529 "bdev_opal_new_user", 00:04:39.529 "bdev_opal_set_lock_state", 00:04:39.529 "bdev_opal_delete", 00:04:39.529 "bdev_opal_get_info", 00:04:39.529 "bdev_opal_create", 00:04:39.529 "bdev_nvme_opal_revert", 00:04:39.529 "bdev_nvme_opal_init", 00:04:39.529 "bdev_nvme_send_cmd", 00:04:39.529 "bdev_nvme_set_keys", 00:04:39.529 "bdev_nvme_get_path_iostat", 00:04:39.529 "bdev_nvme_get_mdns_discovery_info", 00:04:39.529 "bdev_nvme_stop_mdns_discovery", 00:04:39.529 "bdev_nvme_start_mdns_discovery", 00:04:39.529 "bdev_nvme_set_multipath_policy", 00:04:39.529 "bdev_nvme_set_preferred_path", 00:04:39.529 "bdev_nvme_get_io_paths", 00:04:39.529 "bdev_nvme_remove_error_injection", 00:04:39.529 "bdev_nvme_add_error_injection", 00:04:39.529 "bdev_nvme_get_discovery_info", 00:04:39.529 "bdev_nvme_stop_discovery", 00:04:39.529 "bdev_nvme_start_discovery", 00:04:39.529 "bdev_nvme_get_controller_health_info", 00:04:39.529 "bdev_nvme_disable_controller", 00:04:39.529 "bdev_nvme_enable_controller", 00:04:39.529 "bdev_nvme_reset_controller", 00:04:39.529 "bdev_nvme_get_transport_statistics", 00:04:39.529 "bdev_nvme_apply_firmware", 00:04:39.529 "bdev_nvme_detach_controller", 00:04:39.530 "bdev_nvme_get_controllers", 00:04:39.530 "bdev_nvme_attach_controller", 00:04:39.530 "bdev_nvme_set_hotplug", 00:04:39.530 "bdev_nvme_set_options", 00:04:39.530 "bdev_passthru_delete", 00:04:39.530 "bdev_passthru_create", 00:04:39.530 "bdev_lvol_set_parent_bdev", 00:04:39.530 "bdev_lvol_set_parent", 00:04:39.530 "bdev_lvol_check_shallow_copy", 00:04:39.530 "bdev_lvol_start_shallow_copy", 00:04:39.530 "bdev_lvol_grow_lvstore", 00:04:39.530 "bdev_lvol_get_lvols", 00:04:39.530 "bdev_lvol_get_lvstores", 00:04:39.530 "bdev_lvol_delete", 00:04:39.530 "bdev_lvol_set_read_only", 00:04:39.530 "bdev_lvol_resize", 00:04:39.530 "bdev_lvol_decouple_parent", 00:04:39.530 "bdev_lvol_inflate", 00:04:39.530 "bdev_lvol_rename", 00:04:39.530 "bdev_lvol_clone_bdev", 00:04:39.530 "bdev_lvol_clone", 00:04:39.530 "bdev_lvol_snapshot", 00:04:39.530 "bdev_lvol_create", 00:04:39.530 "bdev_lvol_delete_lvstore", 00:04:39.530 "bdev_lvol_rename_lvstore", 00:04:39.530 
"bdev_lvol_create_lvstore", 00:04:39.530 "bdev_raid_set_options", 00:04:39.530 "bdev_raid_remove_base_bdev", 00:04:39.530 "bdev_raid_add_base_bdev", 00:04:39.530 "bdev_raid_delete", 00:04:39.530 "bdev_raid_create", 00:04:39.530 "bdev_raid_get_bdevs", 00:04:39.530 "bdev_error_inject_error", 00:04:39.530 "bdev_error_delete", 00:04:39.530 "bdev_error_create", 00:04:39.530 "bdev_split_delete", 00:04:39.530 "bdev_split_create", 00:04:39.530 "bdev_delay_delete", 00:04:39.530 "bdev_delay_create", 00:04:39.530 "bdev_delay_update_latency", 00:04:39.530 "bdev_zone_block_delete", 00:04:39.530 "bdev_zone_block_create", 00:04:39.530 "blobfs_create", 00:04:39.530 "blobfs_detect", 00:04:39.530 "blobfs_set_cache_size", 00:04:39.530 "bdev_xnvme_delete", 00:04:39.530 "bdev_xnvme_create", 00:04:39.530 "bdev_aio_delete", 00:04:39.530 "bdev_aio_rescan", 00:04:39.530 "bdev_aio_create", 00:04:39.530 "bdev_ftl_set_property", 00:04:39.530 "bdev_ftl_get_properties", 00:04:39.530 "bdev_ftl_get_stats", 00:04:39.530 "bdev_ftl_unmap", 00:04:39.530 "bdev_ftl_unload", 00:04:39.530 "bdev_ftl_delete", 00:04:39.530 "bdev_ftl_load", 00:04:39.530 "bdev_ftl_create", 00:04:39.530 "bdev_virtio_attach_controller", 00:04:39.530 "bdev_virtio_scsi_get_devices", 00:04:39.530 "bdev_virtio_detach_controller", 00:04:39.530 "bdev_virtio_blk_set_hotplug", 00:04:39.530 "bdev_iscsi_delete", 00:04:39.530 "bdev_iscsi_create", 00:04:39.530 "bdev_iscsi_set_options", 00:04:39.530 "accel_error_inject_error", 00:04:39.530 "ioat_scan_accel_module", 00:04:39.530 "dsa_scan_accel_module", 00:04:39.530 "iaa_scan_accel_module", 00:04:39.530 "keyring_file_remove_key", 00:04:39.530 "keyring_file_add_key", 00:04:39.530 "keyring_linux_set_options", 00:04:39.530 "fsdev_aio_delete", 00:04:39.530 "fsdev_aio_create", 00:04:39.530 "iscsi_get_histogram", 00:04:39.530 "iscsi_enable_histogram", 00:04:39.530 "iscsi_set_options", 00:04:39.530 "iscsi_get_auth_groups", 00:04:39.530 "iscsi_auth_group_remove_secret", 00:04:39.530 "iscsi_auth_group_add_secret", 00:04:39.530 "iscsi_delete_auth_group", 00:04:39.530 "iscsi_create_auth_group", 00:04:39.530 "iscsi_set_discovery_auth", 00:04:39.530 "iscsi_get_options", 00:04:39.530 "iscsi_target_node_request_logout", 00:04:39.530 "iscsi_target_node_set_redirect", 00:04:39.530 "iscsi_target_node_set_auth", 00:04:39.530 "iscsi_target_node_add_lun", 00:04:39.530 "iscsi_get_stats", 00:04:39.530 "iscsi_get_connections", 00:04:39.530 "iscsi_portal_group_set_auth", 00:04:39.530 "iscsi_start_portal_group", 00:04:39.530 "iscsi_delete_portal_group", 00:04:39.530 "iscsi_create_portal_group", 00:04:39.530 "iscsi_get_portal_groups", 00:04:39.530 "iscsi_delete_target_node", 00:04:39.530 "iscsi_target_node_remove_pg_ig_maps", 00:04:39.530 "iscsi_target_node_add_pg_ig_maps", 00:04:39.530 "iscsi_create_target_node", 00:04:39.530 "iscsi_get_target_nodes", 00:04:39.530 "iscsi_delete_initiator_group", 00:04:39.530 "iscsi_initiator_group_remove_initiators", 00:04:39.530 "iscsi_initiator_group_add_initiators", 00:04:39.530 "iscsi_create_initiator_group", 00:04:39.530 "iscsi_get_initiator_groups", 00:04:39.530 "nvmf_set_crdt", 00:04:39.530 "nvmf_set_config", 00:04:39.530 "nvmf_set_max_subsystems", 00:04:39.530 "nvmf_stop_mdns_prr", 00:04:39.530 "nvmf_publish_mdns_prr", 00:04:39.530 "nvmf_subsystem_get_listeners", 00:04:39.530 "nvmf_subsystem_get_qpairs", 00:04:39.530 "nvmf_subsystem_get_controllers", 00:04:39.530 "nvmf_get_stats", 00:04:39.530 "nvmf_get_transports", 00:04:39.530 "nvmf_create_transport", 00:04:39.530 "nvmf_get_targets", 00:04:39.530 
"nvmf_delete_target", 00:04:39.530 "nvmf_create_target", 00:04:39.530 "nvmf_subsystem_allow_any_host", 00:04:39.530 "nvmf_subsystem_set_keys", 00:04:39.530 "nvmf_subsystem_remove_host", 00:04:39.530 "nvmf_subsystem_add_host", 00:04:39.530 "nvmf_ns_remove_host", 00:04:39.530 "nvmf_ns_add_host", 00:04:39.530 "nvmf_subsystem_remove_ns", 00:04:39.530 "nvmf_subsystem_set_ns_ana_group", 00:04:39.530 "nvmf_subsystem_add_ns", 00:04:39.530 "nvmf_subsystem_listener_set_ana_state", 00:04:39.530 "nvmf_discovery_get_referrals", 00:04:39.530 "nvmf_discovery_remove_referral", 00:04:39.530 "nvmf_discovery_add_referral", 00:04:39.530 "nvmf_subsystem_remove_listener", 00:04:39.530 "nvmf_subsystem_add_listener", 00:04:39.530 "nvmf_delete_subsystem", 00:04:39.530 "nvmf_create_subsystem", 00:04:39.530 "nvmf_get_subsystems", 00:04:39.530 "env_dpdk_get_mem_stats", 00:04:39.530 "nbd_get_disks", 00:04:39.530 "nbd_stop_disk", 00:04:39.530 "nbd_start_disk", 00:04:39.530 "ublk_recover_disk", 00:04:39.530 "ublk_get_disks", 00:04:39.530 "ublk_stop_disk", 00:04:39.530 "ublk_start_disk", 00:04:39.530 "ublk_destroy_target", 00:04:39.530 "ublk_create_target", 00:04:39.530 "virtio_blk_create_transport", 00:04:39.530 "virtio_blk_get_transports", 00:04:39.530 "vhost_controller_set_coalescing", 00:04:39.530 "vhost_get_controllers", 00:04:39.530 "vhost_delete_controller", 00:04:39.530 "vhost_create_blk_controller", 00:04:39.530 "vhost_scsi_controller_remove_target", 00:04:39.530 "vhost_scsi_controller_add_target", 00:04:39.530 "vhost_start_scsi_controller", 00:04:39.530 "vhost_create_scsi_controller", 00:04:39.530 "thread_set_cpumask", 00:04:39.530 "scheduler_set_options", 00:04:39.530 "framework_get_governor", 00:04:39.530 "framework_get_scheduler", 00:04:39.530 "framework_set_scheduler", 00:04:39.530 "framework_get_reactors", 00:04:39.530 "thread_get_io_channels", 00:04:39.530 "thread_get_pollers", 00:04:39.530 "thread_get_stats", 00:04:39.530 "framework_monitor_context_switch", 00:04:39.530 "spdk_kill_instance", 00:04:39.530 "log_enable_timestamps", 00:04:39.530 "log_get_flags", 00:04:39.530 "log_clear_flag", 00:04:39.531 "log_set_flag", 00:04:39.531 "log_get_level", 00:04:39.531 "log_set_level", 00:04:39.531 "log_get_print_level", 00:04:39.531 "log_set_print_level", 00:04:39.531 "framework_enable_cpumask_locks", 00:04:39.531 "framework_disable_cpumask_locks", 00:04:39.531 "framework_wait_init", 00:04:39.531 "framework_start_init", 00:04:39.531 "scsi_get_devices", 00:04:39.531 "bdev_get_histogram", 00:04:39.531 "bdev_enable_histogram", 00:04:39.531 "bdev_set_qos_limit", 00:04:39.531 "bdev_set_qd_sampling_period", 00:04:39.531 "bdev_get_bdevs", 00:04:39.531 "bdev_reset_iostat", 00:04:39.531 "bdev_get_iostat", 00:04:39.531 "bdev_examine", 00:04:39.531 "bdev_wait_for_examine", 00:04:39.531 "bdev_set_options", 00:04:39.531 "accel_get_stats", 00:04:39.531 "accel_set_options", 00:04:39.531 "accel_set_driver", 00:04:39.531 "accel_crypto_key_destroy", 00:04:39.531 "accel_crypto_keys_get", 00:04:39.531 "accel_crypto_key_create", 00:04:39.531 "accel_assign_opc", 00:04:39.531 "accel_get_module_info", 00:04:39.531 "accel_get_opc_assignments", 00:04:39.531 "vmd_rescan", 00:04:39.531 "vmd_remove_device", 00:04:39.531 "vmd_enable", 00:04:39.531 "sock_get_default_impl", 00:04:39.531 "sock_set_default_impl", 00:04:39.531 "sock_impl_set_options", 00:04:39.531 "sock_impl_get_options", 00:04:39.531 "iobuf_get_stats", 00:04:39.531 "iobuf_set_options", 00:04:39.531 "keyring_get_keys", 00:04:39.531 "framework_get_pci_devices", 00:04:39.531 
"framework_get_config", 00:04:39.531 "framework_get_subsystems", 00:04:39.531 "fsdev_set_opts", 00:04:39.531 "fsdev_get_opts", 00:04:39.531 "trace_get_info", 00:04:39.531 "trace_get_tpoint_group_mask", 00:04:39.531 "trace_disable_tpoint_group", 00:04:39.531 "trace_enable_tpoint_group", 00:04:39.531 "trace_clear_tpoint_mask", 00:04:39.531 "trace_set_tpoint_mask", 00:04:39.531 "notify_get_notifications", 00:04:39.531 "notify_get_types", 00:04:39.531 "spdk_get_version", 00:04:39.531 "rpc_get_methods" 00:04:39.531 ] 00:04:39.531 09:36:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:39.531 09:36:27 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:39.531 09:36:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.789 09:36:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:39.789 09:36:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58012 00:04:39.789 09:36:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58012 ']' 00:04:39.789 09:36:27 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58012 00:04:39.789 09:36:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:39.789 09:36:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.789 09:36:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58012 00:04:39.789 killing process with pid 58012 00:04:39.789 09:36:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.789 09:36:27 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.789 09:36:27 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58012' 00:04:39.789 09:36:27 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58012 00:04:39.789 09:36:27 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58012 00:04:41.164 00:04:41.164 real 0m2.485s 00:04:41.164 user 0m4.447s 00:04:41.164 sys 0m0.427s 00:04:41.164 09:36:28 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.164 09:36:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.164 ************************************ 00:04:41.164 END TEST spdkcli_tcp 00:04:41.164 ************************************ 00:04:41.164 09:36:28 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.164 09:36:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.164 09:36:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.164 09:36:28 -- common/autotest_common.sh@10 -- # set +x 00:04:41.164 ************************************ 00:04:41.164 START TEST dpdk_mem_utility 00:04:41.164 ************************************ 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.164 * Looking for test storage... 
00:04:41.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.164 09:36:28 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.164 --rc genhtml_branch_coverage=1 00:04:41.164 --rc genhtml_function_coverage=1 00:04:41.164 --rc genhtml_legend=1 00:04:41.164 --rc geninfo_all_blocks=1 00:04:41.164 --rc geninfo_unexecuted_blocks=1 00:04:41.164 00:04:41.164 ' 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.164 --rc 
genhtml_branch_coverage=1 00:04:41.164 --rc genhtml_function_coverage=1 00:04:41.164 --rc genhtml_legend=1 00:04:41.164 --rc geninfo_all_blocks=1 00:04:41.164 --rc geninfo_unexecuted_blocks=1 00:04:41.164 00:04:41.164 ' 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.164 --rc genhtml_branch_coverage=1 00:04:41.164 --rc genhtml_function_coverage=1 00:04:41.164 --rc genhtml_legend=1 00:04:41.164 --rc geninfo_all_blocks=1 00:04:41.164 --rc geninfo_unexecuted_blocks=1 00:04:41.164 00:04:41.164 ' 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.164 --rc genhtml_branch_coverage=1 00:04:41.164 --rc genhtml_function_coverage=1 00:04:41.164 --rc genhtml_legend=1 00:04:41.164 --rc geninfo_all_blocks=1 00:04:41.164 --rc geninfo_unexecuted_blocks=1 00:04:41.164 00:04:41.164 ' 00:04:41.164 09:36:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:41.164 09:36:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58112 00:04:41.164 09:36:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58112 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58112 ']' 00:04:41.164 09:36:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.164 09:36:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.164 [2024-12-05 09:36:28.624751] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
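dpdk_mem_utility pairs one RPC with one post-processing script: env_dpdk_get_mem_stats makes the target write its DPDK memory statistics to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py renders the heap/mempool/memzone summaries that follow. The two-step flow, as the test drives it below:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # -> {"filename": "/tmp/spdk_mem_dump.txt"}
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                # heap/mempool/memzone summary
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0           # per-element detail for heap 0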
00:04:41.164 [2024-12-05 09:36:28.624877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58112 ] 00:04:41.164 [2024-12-05 09:36:28.780603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.423 [2024-12-05 09:36:28.857977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.991 09:36:29 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.991 09:36:29 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:41.991 09:36:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:41.991 09:36:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:41.991 09:36:29 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.991 09:36:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.991 { 00:04:41.991 "filename": "/tmp/spdk_mem_dump.txt" 00:04:41.991 } 00:04:41.991 09:36:29 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.991 09:36:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:41.991 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:41.991 1 heaps totaling size 824.000000 MiB 00:04:41.991 size: 824.000000 MiB heap id: 0 00:04:41.991 end heaps---------- 00:04:41.991 9 mempools totaling size 603.782043 MiB 00:04:41.991 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:41.991 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:41.991 size: 100.555481 MiB name: bdev_io_58112 00:04:41.991 size: 50.003479 MiB name: msgpool_58112 00:04:41.991 size: 36.509338 MiB name: fsdev_io_58112 00:04:41.991 size: 21.763794 MiB name: PDU_Pool 00:04:41.991 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:41.991 size: 4.133484 MiB name: evtpool_58112 00:04:41.991 size: 0.026123 MiB name: Session_Pool 00:04:41.991 end mempools------- 00:04:41.991 6 memzones totaling size 4.142822 MiB 00:04:41.991 size: 1.000366 MiB name: RG_ring_0_58112 00:04:41.991 size: 1.000366 MiB name: RG_ring_1_58112 00:04:41.991 size: 1.000366 MiB name: RG_ring_4_58112 00:04:41.991 size: 1.000366 MiB name: RG_ring_5_58112 00:04:41.991 size: 0.125366 MiB name: RG_ring_2_58112 00:04:41.991 size: 0.015991 MiB name: RG_ring_3_58112 00:04:41.991 end memzones------- 00:04:41.991 09:36:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:41.991 heap id: 0 total size: 824.000000 MiB number of busy elements: 330 number of free elements: 18 00:04:41.991 list of free elements. 
size: 16.777710 MiB 00:04:41.991 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:41.991 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:41.991 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:41.991 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:41.991 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:41.991 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:41.991 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:41.991 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:41.991 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:41.991 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:41.991 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:41.991 element at address: 0x20001b400000 with size: 0.558777 MiB 00:04:41.991 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:41.991 element at address: 0x200019600000 with size: 0.488464 MiB 00:04:41.991 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:41.991 element at address: 0x200012c00000 with size: 0.433228 MiB 00:04:41.991 element at address: 0x200028800000 with size: 0.390442 MiB 00:04:41.991 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:41.991 list of standard malloc elements. size: 199.291382 MiB 00:04:41.991 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:41.991 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:41.991 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:41.991 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:41.991 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:41.991 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:41.991 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:41.991 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:41.991 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:41.991 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:41.991 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:41.991 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:41.991 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:41.991 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:41.991 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:41.991 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:41.992 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:41.992 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48f0c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48f1c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48f2c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48f3c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48f4c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:41.992 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b490ec0 with size: 0.000244 MiB 
00:04:41.993 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:41.993 element at 
address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:41.993 element at address: 0x200028863f40 with size: 0.000244 MiB 00:04:41.993 element at address: 0x200028864040 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886af80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886b080 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886b180 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886b280 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886b380 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886b780 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886c980 
with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886d080 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886d180 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886d380 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886d880 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886f480 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:41.993 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:41.994 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:41.994 element at address: 0x20002886fa80 with size: 0.000244 MiB 
00:04:41.994 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:41.994 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:41.994 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:41.994 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:04:41.994 list of memzone associated elements. size: 607.930908 MiB 00:04:41.994 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:41.994 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:41.994 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:41.994 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:41.994 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:41.994 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58112_0 00:04:41.994 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:41.994 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58112_0 00:04:41.994 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:41.994 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58112_0 00:04:41.994 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:41.994 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:41.994 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:41.994 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:41.994 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:41.994 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58112_0 00:04:41.994 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:41.994 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58112 00:04:41.994 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:41.994 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58112 00:04:41.994 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:41.994 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:41.994 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:41.994 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:41.994 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:41.994 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:41.994 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:41.994 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:41.994 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:41.994 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58112 00:04:41.994 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:41.994 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58112 00:04:41.994 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:41.994 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58112 00:04:41.994 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:41.994 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58112 00:04:41.994 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:41.994 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58112 00:04:41.994 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:41.994 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58112 00:04:41.994 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:41.994 
associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:41.994 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:41.994 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:41.994 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:04:41.994 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:41.994 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:41.994 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58112 00:04:41.994 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:41.994 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58112 00:04:41.994 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:41.994 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:41.994 element at address: 0x200028864140 with size: 0.023804 MiB 00:04:41.994 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:41.994 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:41.994 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58112 00:04:41.994 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:04:41.994 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:41.994 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:41.994 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58112 00:04:41.994 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:41.994 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58112 00:04:41.994 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:41.994 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58112 00:04:41.994 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:04:41.994 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:41.994 09:36:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:41.994 09:36:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58112 00:04:41.994 09:36:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58112 ']' 00:04:41.994 09:36:29 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58112 00:04:41.994 09:36:29 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:41.994 09:36:29 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.994 09:36:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58112 00:04:41.994 09:36:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.994 09:36:29 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.994 09:36:29 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58112' 00:04:41.994 killing process with pid 58112 00:04:41.994 09:36:29 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58112 00:04:41.994 09:36:29 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58112 00:04:43.366 00:04:43.366 real 0m2.358s 00:04:43.366 user 0m2.379s 00:04:43.366 sys 0m0.381s 00:04:43.366 09:36:30 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.366 09:36:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.366 ************************************ 00:04:43.366 END TEST dpdk_mem_utility 00:04:43.366 
************************************ 00:04:43.366 09:36:30 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:43.366 09:36:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.366 09:36:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.366 09:36:30 -- common/autotest_common.sh@10 -- # set +x 00:04:43.366 ************************************ 00:04:43.366 START TEST event 00:04:43.366 ************************************ 00:04:43.366 09:36:30 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:43.366 * Looking for test storage... 00:04:43.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:43.366 09:36:30 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:43.366 09:36:30 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:43.366 09:36:30 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:43.366 09:36:30 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:43.366 09:36:30 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.366 09:36:30 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.366 09:36:30 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.366 09:36:30 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.366 09:36:30 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.366 09:36:30 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.366 09:36:30 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.366 09:36:30 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.366 09:36:30 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.366 09:36:30 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.366 09:36:30 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.366 09:36:30 event -- scripts/common.sh@344 -- # case "$op" in 00:04:43.366 09:36:30 event -- scripts/common.sh@345 -- # : 1 00:04:43.366 09:36:30 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.366 09:36:30 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.366 09:36:30 event -- scripts/common.sh@365 -- # decimal 1 00:04:43.366 09:36:30 event -- scripts/common.sh@353 -- # local d=1 00:04:43.366 09:36:30 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.366 09:36:30 event -- scripts/common.sh@355 -- # echo 1 00:04:43.366 09:36:30 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.366 09:36:30 event -- scripts/common.sh@366 -- # decimal 2 00:04:43.366 09:36:30 event -- scripts/common.sh@353 -- # local d=2 00:04:43.366 09:36:30 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.366 09:36:30 event -- scripts/common.sh@355 -- # echo 2 00:04:43.366 09:36:30 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.366 09:36:30 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.366 09:36:30 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.366 09:36:30 event -- scripts/common.sh@368 -- # return 0 00:04:43.366 09:36:30 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.366 09:36:30 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:43.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.366 --rc genhtml_branch_coverage=1 00:04:43.366 --rc genhtml_function_coverage=1 00:04:43.366 --rc genhtml_legend=1 00:04:43.366 --rc geninfo_all_blocks=1 00:04:43.366 --rc geninfo_unexecuted_blocks=1 00:04:43.366 00:04:43.366 ' 00:04:43.366 09:36:30 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:43.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.366 --rc genhtml_branch_coverage=1 00:04:43.366 --rc genhtml_function_coverage=1 00:04:43.366 --rc genhtml_legend=1 00:04:43.366 --rc geninfo_all_blocks=1 00:04:43.366 --rc geninfo_unexecuted_blocks=1 00:04:43.366 00:04:43.366 ' 00:04:43.366 09:36:30 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:43.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.366 --rc genhtml_branch_coverage=1 00:04:43.366 --rc genhtml_function_coverage=1 00:04:43.366 --rc genhtml_legend=1 00:04:43.366 --rc geninfo_all_blocks=1 00:04:43.366 --rc geninfo_unexecuted_blocks=1 00:04:43.366 00:04:43.366 ' 00:04:43.366 09:36:30 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:43.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.366 --rc genhtml_branch_coverage=1 00:04:43.366 --rc genhtml_function_coverage=1 00:04:43.366 --rc genhtml_legend=1 00:04:43.366 --rc geninfo_all_blocks=1 00:04:43.366 --rc geninfo_unexecuted_blocks=1 00:04:43.366 00:04:43.366 ' 00:04:43.366 09:36:30 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:43.366 09:36:30 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:43.366 09:36:30 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.366 09:36:30 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:43.366 09:36:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.366 09:36:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.366 ************************************ 00:04:43.366 START TEST event_perf 00:04:43.366 ************************************ 00:04:43.366 09:36:30 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.366 Running I/O for 1 seconds...[2024-12-05 
09:36:30.970640] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:04:43.366 [2024-12-05 09:36:30.970826] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58203 ] 00:04:43.623 [2024-12-05 09:36:31.126969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.623 [2024-12-05 09:36:31.212087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.623 [2024-12-05 09:36:31.212451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.623 [2024-12-05 09:36:31.212502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.623 Running I/O for 1 seconds...[2024-12-05 09:36:31.212560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.994 00:04:44.994 lcore 0: 201578 00:04:44.994 lcore 1: 201580 00:04:44.994 lcore 2: 201581 00:04:44.994 lcore 3: 201577 00:04:44.994 done. 00:04:44.994 ************************************ 00:04:44.994 END TEST event_perf 00:04:44.994 ************************************ 00:04:44.994 00:04:44.994 real 0m1.402s 00:04:44.994 user 0m4.200s 00:04:44.994 sys 0m0.085s 00:04:44.994 09:36:32 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.994 09:36:32 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.994 09:36:32 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.994 09:36:32 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:44.994 09:36:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.994 09:36:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.994 ************************************ 00:04:44.994 START TEST event_reactor 00:04:44.994 ************************************ 00:04:44.994 09:36:32 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.994 [2024-12-05 09:36:32.416268] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
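
The event_perf run above is the throughput half of the event framework tests: it starts one reactor per core in the 0xF mask and counts events processed per lcore over the one-second window, so the "lcore N: ~201k" lines are the figure of merit (note user time ~4x wall time, since all four reactors poll for the full second). A minimal standalone reproduction, using only the binary path and flags already visible in this trace, would be:

  # Sketch: re-run the measurement above outside the harness.
  # -m 0xF pins reactors to cores 0-3, -t 1 runs for one second;
  # each "lcore N: <count>" line is events processed on that core.
  # (Needs hugepage setup/privileges, as for any DPDK EAL app.)
  SPDK_DIR=/home/vagrant/spdk_repo/spdk   # repo path as used by this job
  "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1
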
00:04:44.994 [2024-12-05 09:36:32.416384] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58243 ] 00:04:44.994 [2024-12-05 09:36:32.570692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.250 [2024-12-05 09:36:32.646821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.182 test_start 00:04:46.182 oneshot 00:04:46.182 tick 100 00:04:46.182 tick 100 00:04:46.182 tick 250 00:04:46.182 tick 100 00:04:46.182 tick 100 00:04:46.182 tick 250 00:04:46.182 tick 100 00:04:46.182 tick 500 00:04:46.182 tick 100 00:04:46.182 tick 100 00:04:46.182 tick 250 00:04:46.182 tick 100 00:04:46.182 tick 100 00:04:46.182 test_end 00:04:46.182 ************************************ 00:04:46.182 END TEST event_reactor 00:04:46.182 ************************************ 00:04:46.182 00:04:46.182 real 0m1.382s 00:04:46.182 user 0m1.209s 00:04:46.182 sys 0m0.066s 00:04:46.182 09:36:33 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.182 09:36:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:46.182 09:36:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.182 09:36:33 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:46.182 09:36:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.182 09:36:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.440 ************************************ 00:04:46.440 START TEST event_reactor_perf 00:04:46.440 ************************************ 00:04:46.440 09:36:33 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.440 [2024-12-05 09:36:33.848886] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
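
The event_reactor test above is the functional half: judging from its output, it arms what looks like one one-shot event plus periodic timers, and each "tick 100" / "tick 250" / "tick 500" line is one expiry of the timer with that period, bracketed by test_start/test_end. Standalone it is a single-core run (note -c 0x1 in its EAL parameters):

  # Sketch: single-reactor timer test, one-second duration, as traced above;
  # output is the oneshot/tick sequence shown in the log.
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
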
00:04:46.441 [2024-12-05 09:36:33.849112] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58274 ] 00:04:46.441 [2024-12-05 09:36:34.004849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.699 [2024-12-05 09:36:34.080154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.641 test_start 00:04:47.641 test_end 00:04:47.641 Performance: 412879 events per second 00:04:47.641 00:04:47.641 real 0m1.381s 00:04:47.642 user 0m1.207s 00:04:47.642 sys 0m0.067s 00:04:47.642 ************************************ 00:04:47.642 END TEST event_reactor_perf 00:04:47.642 ************************************ 00:04:47.642 09:36:35 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.642 09:36:35 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.642 09:36:35 event -- event/event.sh@49 -- # uname -s 00:04:47.642 09:36:35 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:47.642 09:36:35 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:47.642 09:36:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.642 09:36:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.642 09:36:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.642 ************************************ 00:04:47.642 START TEST event_scheduler 00:04:47.642 ************************************ 00:04:47.642 09:36:35 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:47.900 * Looking for test storage... 
00:04:47.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:47.900 09:36:35 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.900 09:36:35 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.900 09:36:35 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.900 09:36:35 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.900 09:36:35 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.900 09:36:35 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.900 09:36:35 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.900 09:36:35 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.901 09:36:35 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:47.901 09:36:35 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.901 09:36:35 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.901 --rc genhtml_branch_coverage=1 00:04:47.901 --rc genhtml_function_coverage=1 00:04:47.901 --rc genhtml_legend=1 00:04:47.901 --rc geninfo_all_blocks=1 00:04:47.901 --rc geninfo_unexecuted_blocks=1 00:04:47.901 00:04:47.901 ' 00:04:47.901 09:36:35 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.901 --rc genhtml_branch_coverage=1 00:04:47.901 --rc genhtml_function_coverage=1 00:04:47.901 --rc genhtml_legend=1 00:04:47.901 --rc geninfo_all_blocks=1 00:04:47.901 --rc geninfo_unexecuted_blocks=1 00:04:47.901 00:04:47.901 ' 00:04:47.901 09:36:35 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.901 --rc genhtml_branch_coverage=1 00:04:47.901 --rc genhtml_function_coverage=1 00:04:47.901 --rc genhtml_legend=1 00:04:47.901 --rc geninfo_all_blocks=1 00:04:47.901 --rc geninfo_unexecuted_blocks=1 00:04:47.901 00:04:47.901 ' 00:04:47.901 09:36:35 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.901 --rc genhtml_branch_coverage=1 00:04:47.901 --rc genhtml_function_coverage=1 00:04:47.901 --rc genhtml_legend=1 00:04:47.901 --rc geninfo_all_blocks=1 00:04:47.901 --rc geninfo_unexecuted_blocks=1 00:04:47.901 00:04:47.901 ' 00:04:47.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
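
The scheduler test that follows is RPC-driven end to end: the app starts with --wait-for-rpc, the harness switches it to the dynamic scheduler, finishes framework init, then creates, retunes, and deletes threads through a plugin. Condensed to the calls visible in the trace below (rpc_cmd is the harness wrapper around scripts/rpc.py; it waits for the RPC socket and arranges for scheduler_plugin to be importable), the sequence is roughly:

  # Sketch of the RPC sequence driven below (default socket /var/tmp/spdk.sock).
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
  "$SPDK/scripts/rpc.py" framework_set_scheduler dynamic
  "$SPDK/scripts/rpc.py" framework_start_init
  # One thread per flavor, e.g. an always-busy thread pinned to core 0:
  "$SPDK/scripts/rpc.py" --plugin scheduler_plugin scheduler_thread_create \
      -n active_pinned -m 0x1 -a 100
  # Thread ids (11 and 12 in the trace) come back from the create calls:
  "$SPDK/scripts/rpc.py" --plugin scheduler_plugin scheduler_thread_set_active 11 50
  "$SPDK/scripts/rpc.py" --plugin scheduler_plugin scheduler_thread_delete 12
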
00:04:47.901 09:36:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:47.901 09:36:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58350 00:04:47.901 09:36:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.901 09:36:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58350 00:04:47.901 09:36:35 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58350 ']' 00:04:47.901 09:36:35 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.901 09:36:35 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.901 09:36:35 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.901 09:36:35 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.901 09:36:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.901 09:36:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:47.901 [2024-12-05 09:36:35.476209] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:04:47.901 [2024-12-05 09:36:35.476345] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58350 ] 00:04:48.159 [2024-12-05 09:36:35.636433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.159 [2024-12-05 09:36:35.738016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.159 [2024-12-05 09:36:35.738297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.159 [2024-12-05 09:36:35.738530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.159 [2024-12-05 09:36:35.738563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.726 09:36:36 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.726 09:36:36 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:48.726 09:36:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:48.726 09:36:36 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.726 09:36:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.726 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.726 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.726 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.726 POWER: Cannot set governor of lcore 0 to performance 00:04:48.726 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.726 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.726 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.726 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.726 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:48.726 GUEST_CHANNEL: Unable to connect to 
'/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:48.726 POWER: Unable to set Power Management Environment for lcore 0 00:04:48.726 [2024-12-05 09:36:36.280037] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:48.726 [2024-12-05 09:36:36.280141] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:48.726 [2024-12-05 09:36:36.280163] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:48.726 [2024-12-05 09:36:36.280221] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:48.726 [2024-12-05 09:36:36.280245] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:48.726 [2024-12-05 09:36:36.280266] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:48.726 09:36:36 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.726 09:36:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:48.726 09:36:36 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.726 09:36:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.985 [2024-12-05 09:36:36.509501] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:48.985 09:36:36 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.985 09:36:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:48.985 09:36:36 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.985 09:36:36 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.985 09:36:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.985 ************************************ 00:04:48.985 START TEST scheduler_create_thread 00:04:48.985 ************************************ 00:04:48.985 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:48.985 09:36:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:48.985 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.985 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.985 2 00:04:48.985 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.985 09:36:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:48.985 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.985 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.985 3 00:04:48.985 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.985 09:36:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:48.985 09:36:36 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.985 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.985 4 00:04:48.985 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.985 09:36:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:48.985 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.986 5 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.986 6 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.986 7 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.986 8 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.986 9 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.986 10 00:04:48.986 09:36:36 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.986 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.245 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.245 09:36:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:49.245 09:36:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:49.245 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.245 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.245 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.245 09:36:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:49.245 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.245 09:36:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.182 09:36:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.182 09:36:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:50.182 09:36:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:50.182 09:36:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.182 09:36:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.139 ************************************ 00:04:51.139 END TEST scheduler_create_thread 00:04:51.139 ************************************ 00:04:51.139 09:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.139 00:04:51.139 real 0m2.137s 00:04:51.139 user 0m0.015s 00:04:51.139 sys 0m0.007s 00:04:51.139 09:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.139 09:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.139 09:36:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:51.139 09:36:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58350 00:04:51.139 09:36:38 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58350 ']' 00:04:51.139 09:36:38 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58350 00:04:51.139 09:36:38 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:51.139 09:36:38 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.139 09:36:38 event.event_scheduler -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58350 00:04:51.139 killing process with pid 58350 00:04:51.139 09:36:38 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:51.139 09:36:38 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:51.139 09:36:38 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58350' 00:04:51.139 09:36:38 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58350 00:04:51.139 09:36:38 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58350 00:04:51.706 [2024-12-05 09:36:39.143570] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:52.273 ************************************ 00:04:52.273 END TEST event_scheduler 00:04:52.273 ************************************ 00:04:52.273 00:04:52.273 real 0m4.457s 00:04:52.273 user 0m7.498s 00:04:52.273 sys 0m0.325s 00:04:52.273 09:36:39 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.273 09:36:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.273 09:36:39 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:52.273 09:36:39 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:52.273 09:36:39 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.273 09:36:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.273 09:36:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.273 ************************************ 00:04:52.273 START TEST app_repeat 00:04:52.273 ************************************ 00:04:52.273 09:36:39 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:52.273 09:36:39 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.273 09:36:39 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.273 09:36:39 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:52.273 09:36:39 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.273 09:36:39 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:52.273 09:36:39 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:52.273 09:36:39 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:52.273 Process app_repeat pid: 58445 00:04:52.273 spdk_app_start Round 0 00:04:52.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
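For reference, the scheduler_create_thread sequence traced above condenses to the following shell sketch. RPC names and arguments are copied from the log; rpc_cmd in the trace is SPDK's test wrapper around scripts/rpc.py, shown here as a plain variable, and the active_pinned calls for masks 0x1-0x4 fall before this excerpt, so their loop form is an assumption based on the visible 0x8 call.

  rpc="scripts/rpc.py --plugin scheduler_plugin"   # stand-in for rpc_cmd (assumption)
  # one fully-busy and one idle thread pinned to each of the four cores
  for mask in 0x1 0x2 0x4 0x8; do
      $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
      $rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0
  done
  $rpc scheduler_thread_create -n one_third_active -a 30      # unpinned, 30% load
  tid=$($rpc scheduler_thread_create -n half_active -a 0)     # created idle (id 11 in the log)...
  $rpc scheduler_thread_set_active "$tid" 50                  # ...then raised to 50% active
  tid=$($rpc scheduler_thread_create -n deleted -a 100)       # id 12 in the log
  $rpc scheduler_thread_delete "$tid"                         # exercises the deletion path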
00:04:52.273 09:36:39 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58445 00:04:52.273 09:36:39 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.273 09:36:39 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58445' 00:04:52.273 09:36:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:52.273 09:36:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:52.273 09:36:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58445 /var/tmp/spdk-nbd.sock 00:04:52.273 09:36:39 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:52.273 09:36:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58445 ']' 00:04:52.273 09:36:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.273 09:36:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.273 09:36:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.273 09:36:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.273 09:36:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.273 [2024-12-05 09:36:39.827488] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:04:52.273 [2024-12-05 09:36:39.827610] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58445 ] 00:04:52.531 [2024-12-05 09:36:39.984725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.532 [2024-12-05 09:36:40.097288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.532 [2024-12-05 09:36:40.097397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.098 09:36:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.098 09:36:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:53.098 09:36:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.357 Malloc0 00:04:53.357 09:36:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.616 Malloc1 00:04:53.616 09:36:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.616 09:36:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.616 09:36:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.616 09:36:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.616 09:36:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.616 09:36:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.616 09:36:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.616 09:36:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # 
local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.616 09:36:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.616 09:36:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.616 09:36:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.616 09:36:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.616 09:36:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:53.616 09:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.616 09:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.616 09:36:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.875 /dev/nbd0 00:04:53.875 09:36:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.875 09:36:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.875 09:36:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:53.875 09:36:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:53.875 09:36:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:53.875 09:36:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:53.875 09:36:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:53.875 09:36:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:53.875 09:36:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:53.875 09:36:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:53.875 09:36:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.875 1+0 records in 00:04:53.875 1+0 records out 00:04:53.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173868 s, 23.6 MB/s 00:04:53.875 09:36:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.875 09:36:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:53.875 09:36:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.875 09:36:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:53.875 09:36:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:53.875 09:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.875 09:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.875 09:36:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.134 /dev/nbd1 00:04:54.134 09:36:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.134 09:36:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.134 09:36:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:54.134 09:36:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.134 09:36:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.134 09:36:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.134 09:36:41 event.app_repeat -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:54.134 09:36:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.134 09:36:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.134 09:36:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.134 09:36:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.134 1+0 records in 00:04:54.134 1+0 records out 00:04:54.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018403 s, 22.3 MB/s 00:04:54.134 09:36:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.134 09:36:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.134 09:36:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.134 09:36:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.134 09:36:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:54.134 09:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.134 09:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.134 09:36:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.134 09:36:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.134 09:36:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.134 09:36:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:54.134 { 00:04:54.134 "nbd_device": "/dev/nbd0", 00:04:54.134 "bdev_name": "Malloc0" 00:04:54.134 }, 00:04:54.134 { 00:04:54.134 "nbd_device": "/dev/nbd1", 00:04:54.134 "bdev_name": "Malloc1" 00:04:54.134 } 00:04:54.134 ]' 00:04:54.134 09:36:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.134 { 00:04:54.134 "nbd_device": "/dev/nbd0", 00:04:54.134 "bdev_name": "Malloc0" 00:04:54.134 }, 00:04:54.134 { 00:04:54.134 "nbd_device": "/dev/nbd1", 00:04:54.134 "bdev_name": "Malloc1" 00:04:54.134 } 00:04:54.134 ]' 00:04:54.134 09:36:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.393 /dev/nbd1' 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.393 /dev/nbd1' 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.393 256+0 records in 00:04:54.393 256+0 records out 00:04:54.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415219 s, 253 MB/s 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.393 256+0 records in 00:04:54.393 256+0 records out 00:04:54.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156331 s, 67.1 MB/s 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.393 256+0 records in 00:04:54.393 256+0 records out 00:04:54.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158814 s, 66.0 MB/s 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.393 09:36:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.653 09:36:42 
event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.653 09:36:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.912 09:36:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.912 09:36:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.912 09:36:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.912 09:36:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.912 09:36:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.912 09:36:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.912 09:36:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:54.912 09:36:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.912 09:36:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.912 09:36:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.912 09:36:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.912 09:36:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.912 09:36:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.171 09:36:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:55.738 [2024-12-05 09:36:43.340614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.996 [2024-12-05 09:36:43.410397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.996 [2024-12-05 09:36:43.410501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.996 [2024-12-05 09:36:43.511612] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
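Round 0's data-verify pass (nbd_common.sh@100 through @85, traced above) reduces to this dd/cmp cycle. Paths, block sizes, and flags are taken verbatim from the log; only the loop structure is condensed from the per-device xtrace entries.

  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256      # 1 MiB of random reference data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it through each nbd
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"                      # byte-compare the first 1 MiB back
  done
  rm "$tmp"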
00:04:55.996 [2024-12-05 09:36:43.511674] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:58.530 spdk_app_start Round 1 00:04:58.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.530 09:36:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:58.530 09:36:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:58.530 09:36:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58445 /var/tmp/spdk-nbd.sock 00:04:58.530 09:36:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58445 ']' 00:04:58.530 09:36:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.530 09:36:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.530 09:36:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:58.530 09:36:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.530 09:36:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:58.530 09:36:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.530 09:36:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:58.530 09:36:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.788 Malloc0 00:04:58.788 09:36:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.047 Malloc1 00:04:59.047 09:36:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.047 09:36:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.047 09:36:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.047 09:36:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.047 09:36:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.047 09:36:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.047 09:36:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.047 09:36:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.047 09:36:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.047 09:36:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.047 09:36:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.047 09:36:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.047 09:36:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:59.047 09:36:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.047 09:36:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.047 09:36:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.047 /dev/nbd0 00:04:59.333 09:36:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
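The waitfornbd helper, traced just below and earlier in Round 0, polls /proc/partitions for the device and then proves it serves I/O with a single direct read. A minimal sketch of that pattern follows; the temp-file path is a stand-in and the sleep interval is an assumption, since the trace only shows the grep, dd, stat, and rm steps.

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break   # device visible yet?
          sleep 0.1                                          # retry delay (assumption)
      done
      ((i <= 20)) || return 1
      # one direct 4 KiB read confirms the device actually answers I/O
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # path is illustrative
      [ "$(stat -c %s /tmp/nbdtest)" != 0 ] || return 1
      rm -f /tmp/nbdtest
  }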
00:04:59.333 09:36:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.333 09:36:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:59.333 09:36:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:59.333 09:36:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:59.333 09:36:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:59.333 09:36:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:59.333 09:36:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:59.333 09:36:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:59.333 09:36:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:59.333 09:36:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.333 1+0 records in 00:04:59.333 1+0 records out 00:04:59.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053405 s, 7.7 MB/s 00:04:59.333 09:36:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.333 09:36:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:59.333 09:36:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.333 09:36:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:59.333 09:36:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:59.333 09:36:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.333 09:36:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.333 09:36:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.333 /dev/nbd1 00:04:59.333 09:36:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.617 09:36:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.617 09:36:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:59.617 09:36:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:59.617 09:36:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:59.617 09:36:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:59.617 09:36:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:59.617 09:36:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:59.617 09:36:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:59.617 09:36:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:59.617 09:36:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.617 1+0 records in 00:04:59.617 1+0 records out 00:04:59.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239945 s, 17.1 MB/s 00:04:59.617 09:36:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.617 09:36:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:59.617 09:36:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.617 09:36:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:59.617 09:36:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:59.617 09:36:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.617 09:36:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.617 09:36:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.617 09:36:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.617 09:36:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.617 { 00:04:59.617 "nbd_device": "/dev/nbd0", 00:04:59.617 "bdev_name": "Malloc0" 00:04:59.617 }, 00:04:59.617 { 00:04:59.617 "nbd_device": "/dev/nbd1", 00:04:59.617 "bdev_name": "Malloc1" 00:04:59.617 } 00:04:59.617 ]' 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.617 { 00:04:59.617 "nbd_device": "/dev/nbd0", 00:04:59.617 "bdev_name": "Malloc0" 00:04:59.617 }, 00:04:59.617 { 00:04:59.617 "nbd_device": "/dev/nbd1", 00:04:59.617 "bdev_name": "Malloc1" 00:04:59.617 } 00:04:59.617 ]' 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.617 /dev/nbd1' 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.617 /dev/nbd1' 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.617 256+0 records in 00:04:59.617 256+0 records out 00:04:59.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00840761 s, 125 MB/s 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.617 256+0 records in 00:04:59.617 256+0 records out 00:04:59.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165026 s, 63.5 MB/s 00:04:59.617 09:36:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.617 09:36:47 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.875 256+0 records in 00:04:59.875 256+0 records out 00:04:59.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209451 s, 50.1 MB/s 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.875 09:36:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.133 09:36:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.133 09:36:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.133 09:36:47 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:05:00.133 09:36:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.133 09:36:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.133 09:36:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:00.133 09:36:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.133 09:36:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.133 09:36:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.133 09:36:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.133 09:36:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.391 09:36:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.391 09:36:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.391 09:36:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.391 09:36:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.391 09:36:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.391 09:36:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.391 09:36:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:00.391 09:36:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.391 09:36:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.391 09:36:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.391 09:36:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.391 09:36:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.391 09:36:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:00.648 09:36:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:01.215 [2024-12-05 09:36:48.770369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.215 [2024-12-05 09:36:48.837393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.215 [2024-12-05 09:36:48.837393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.473 [2024-12-05 09:36:48.932692] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.473 [2024-12-05 09:36:48.932752] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.006 09:36:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.006 spdk_app_start Round 2 00:05:04.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:04.006 09:36:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:04.006 09:36:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58445 /var/tmp/spdk-nbd.sock 00:05:04.006 09:36:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58445 ']' 00:05:04.006 09:36:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.006 09:36:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.006 09:36:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
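waitforlisten, whose entry was just traced, runs its body with xtrace disabled, so the log shows only the echo and the eventual return 0. One plausible minimal implementation of the pattern is sketched below; the actual SPDK helper in autotest_common.sh is more involved, and the liveness and socket checks here are assumptions consistent with the visible max_retries=100 and rpc_addr values.

  waitforlisten() {
      local pid=$1
      local rpc_addr=${2:-/var/tmp/spdk-nbd.sock}
      local max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died while starting (assumption)
          [ -S "$rpc_addr" ] && return 0           # socket present: app is listening (assumption)
          sleep 0.1
      done
      return 1
  }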
00:05:04.006 09:36:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.006 09:36:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.006 09:36:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.006 09:36:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:04.006 09:36:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.265 Malloc0 00:05:04.265 09:36:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.265 Malloc1 00:05:04.524 09:36:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.524 09:36:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.524 09:36:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.524 09:36:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.524 09:36:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.524 09:36:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.524 09:36:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.524 09:36:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.524 09:36:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.524 09:36:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.524 09:36:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.524 09:36:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.524 09:36:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.524 09:36:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.524 09:36:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.524 09:36:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.524 /dev/nbd0 00:05:04.524 09:36:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.524 09:36:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.524 09:36:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:04.524 09:36:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:04.524 09:36:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:04.524 09:36:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:04.524 09:36:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:04.524 09:36:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:04.524 09:36:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:04.524 09:36:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:04.524 09:36:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.524 1+0 records in 00:05:04.524 1+0 records out 
00:05:04.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450384 s, 9.1 MB/s 00:05:04.524 09:36:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.524 09:36:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:04.524 09:36:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.524 09:36:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:04.524 09:36:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:04.524 09:36:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.524 09:36:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.524 09:36:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.783 /dev/nbd1 00:05:04.783 09:36:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:04.783 09:36:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:04.783 09:36:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:04.783 09:36:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:04.783 09:36:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:04.783 09:36:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:04.783 09:36:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:04.783 09:36:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:04.783 09:36:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:04.783 09:36:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:04.783 09:36:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.783 1+0 records in 00:05:04.783 1+0 records out 00:05:04.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175471 s, 23.3 MB/s 00:05:04.783 09:36:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.783 09:36:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:04.783 09:36:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.783 09:36:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:04.783 09:36:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:04.783 09:36:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.783 09:36:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.783 09:36:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.783 09:36:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.783 09:36:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.041 { 00:05:05.041 "nbd_device": "/dev/nbd0", 00:05:05.041 "bdev_name": "Malloc0" 00:05:05.041 }, 00:05:05.041 { 00:05:05.041 "nbd_device": "/dev/nbd1", 00:05:05.041 "bdev_name": "Malloc1" 00:05:05.041 } 
00:05:05.041 ]' 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.041 { 00:05:05.041 "nbd_device": "/dev/nbd0", 00:05:05.041 "bdev_name": "Malloc0" 00:05:05.041 }, 00:05:05.041 { 00:05:05.041 "nbd_device": "/dev/nbd1", 00:05:05.041 "bdev_name": "Malloc1" 00:05:05.041 } 00:05:05.041 ]' 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.041 /dev/nbd1' 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.041 /dev/nbd1' 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.041 256+0 records in 00:05:05.041 256+0 records out 00:05:05.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00787354 s, 133 MB/s 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.041 256+0 records in 00:05:05.041 256+0 records out 00:05:05.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126933 s, 82.6 MB/s 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.041 256+0 records in 00:05:05.041 256+0 records out 00:05:05.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155115 s, 67.6 MB/s 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.041 09:36:52 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.041 09:36:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.300 09:36:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.300 09:36:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.300 09:36:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.300 09:36:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.300 09:36:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.300 09:36:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.300 09:36:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.300 09:36:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.300 09:36:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.300 09:36:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.557 09:36:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.557 09:36:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.557 09:36:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.557 09:36:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.557 09:36:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.557 09:36:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.557 09:36:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.557 09:36:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.557 09:36:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.557 09:36:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.557 09:36:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.817 09:36:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.817 09:36:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.817 09:36:53 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.817 09:36:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.817 09:36:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.817 09:36:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.817 09:36:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:05.817 09:36:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.817 09:36:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.817 09:36:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.817 09:36:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.817 09:36:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.817 09:36:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.075 09:36:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.640 [2024-12-05 09:36:54.123091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.640 [2024-12-05 09:36:54.189980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.640 [2024-12-05 09:36:54.190080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.897 [2024-12-05 09:36:54.291673] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.897 [2024-12-05 09:36:54.291729] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.471 09:36:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58445 /var/tmp/spdk-nbd.sock 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58445 ']' 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
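The nbd_get_count check traced above, used after nbd_stop_disks to confirm that no devices remain attached, is this small jq/grep pipeline. The rpc.py call, jq filter, and the || true guard all appear verbatim in the trace; only the function framing is condensed.

  nbd_get_count() {
      local rpc_server=$1
      local disks_json disks_name count
      disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
      disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
      # grep -c exits non-zero on zero matches, hence the || true in the trace
      count=$(echo "$disks_name" | grep -c /dev/nbd || true)
      echo "$count"    # 0 here, since all disks were stopped
  }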
00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:09.471 09:36:56 event.app_repeat -- event/event.sh@39 -- # killprocess 58445 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58445 ']' 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58445 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58445 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.471 killing process with pid 58445 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58445' 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58445 00:05:09.471 09:36:56 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58445 00:05:09.731 spdk_app_start is called in Round 0. 00:05:09.731 Shutdown signal received, stop current app iteration 00:05:09.731 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 reinitialization... 00:05:09.731 spdk_app_start is called in Round 1. 00:05:09.731 Shutdown signal received, stop current app iteration 00:05:09.731 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 reinitialization... 00:05:09.731 spdk_app_start is called in Round 2. 00:05:09.731 Shutdown signal received, stop current app iteration 00:05:09.731 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 reinitialization... 00:05:09.731 spdk_app_start is called in Round 3. 00:05:09.731 Shutdown signal received, stop current app iteration 00:05:09.731 09:36:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:09.731 09:36:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:09.731 00:05:09.731 real 0m17.562s 00:05:09.731 user 0m38.535s 00:05:09.731 sys 0m1.969s 00:05:09.731 ************************************ 00:05:09.731 END TEST app_repeat 00:05:09.731 ************************************ 00:05:09.731 09:36:57 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.731 09:36:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.991 09:36:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:09.991 09:36:57 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:09.991 09:36:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.991 09:36:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.991 09:36:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.991 ************************************ 00:05:09.991 START TEST cpu_locks 00:05:09.991 ************************************ 00:05:09.991 09:36:57 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:09.991 * Looking for test storage... 
00:05:09.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:09.991 09:36:57 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.991 09:36:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.991 09:36:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.991 09:36:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:09.991 09:36:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.992 09:36:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:09.992 09:36:57 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.992 09:36:57 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:09.992 09:36:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:09.992 09:36:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.992 09:36:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:09.992 09:36:57 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.992 09:36:57 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.992 09:36:57 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.992 09:36:57 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:09.992 09:36:57 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.992 09:36:57 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.992 --rc genhtml_branch_coverage=1 00:05:09.992 --rc genhtml_function_coverage=1 00:05:09.992 --rc genhtml_legend=1 00:05:09.992 --rc geninfo_all_blocks=1 00:05:09.992 --rc geninfo_unexecuted_blocks=1 00:05:09.992 00:05:09.992 ' 00:05:09.992 09:36:57 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.992 --rc genhtml_branch_coverage=1 00:05:09.992 --rc genhtml_function_coverage=1 
00:05:09.992 --rc genhtml_legend=1 00:05:09.992 --rc geninfo_all_blocks=1 00:05:09.992 --rc geninfo_unexecuted_blocks=1 00:05:09.992 00:05:09.992 ' 00:05:09.992 09:36:57 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.992 --rc genhtml_branch_coverage=1 00:05:09.992 --rc genhtml_function_coverage=1 00:05:09.992 --rc genhtml_legend=1 00:05:09.992 --rc geninfo_all_blocks=1 00:05:09.992 --rc geninfo_unexecuted_blocks=1 00:05:09.992 00:05:09.992 ' 00:05:09.992 09:36:57 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.992 --rc genhtml_branch_coverage=1 00:05:09.992 --rc genhtml_function_coverage=1 00:05:09.992 --rc genhtml_legend=1 00:05:09.992 --rc geninfo_all_blocks=1 00:05:09.992 --rc geninfo_unexecuted_blocks=1 00:05:09.992 00:05:09.992 ' 00:05:09.992 09:36:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:09.992 09:36:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:09.992 09:36:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:09.992 09:36:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:09.992 09:36:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.992 09:36:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.992 09:36:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.992 ************************************ 00:05:09.992 START TEST default_locks 00:05:09.992 ************************************ 00:05:09.992 09:36:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:09.992 09:36:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58870 00:05:09.992 09:36:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58870 00:05:09.992 09:36:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58870 ']' 00:05:09.992 09:36:57 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.992 09:36:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.992 09:36:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.992 09:36:57 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.992 09:36:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.992 09:36:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.252 [2024-12-05 09:36:57.623206] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:10.252 [2024-12-05 09:36:57.623295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58870 ] 00:05:10.252 [2024-12-05 09:36:57.779210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.512 [2024-12-05 09:36:57.880936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.081 09:36:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.081 09:36:58 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:11.081 09:36:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58870 00:05:11.081 09:36:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.081 09:36:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58870 00:05:11.081 09:36:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58870 00:05:11.081 09:36:58 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58870 ']' 00:05:11.081 09:36:58 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58870 00:05:11.081 09:36:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:11.341 09:36:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.341 09:36:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58870 00:05:11.341 09:36:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.341 09:36:58 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.341 killing process with pid 58870 00:05:11.341 09:36:58 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58870' 00:05:11.341 09:36:58 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58870 00:05:11.341 09:36:58 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58870 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58870 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58870 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58870 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58870 ']' 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.726 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.726 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58870) - No such process 00:05:12.726 ERROR: process (pid: 58870) is no longer running 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:12.726 00:05:12.726 real 0m2.477s 00:05:12.726 user 0m2.412s 00:05:12.726 sys 0m0.458s 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.726 ************************************ 00:05:12.726 09:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.726 END TEST default_locks 00:05:12.726 ************************************ 00:05:12.726 09:37:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:12.726 09:37:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.726 09:37:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.726 09:37:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.726 ************************************ 00:05:12.726 START TEST default_locks_via_rpc 00:05:12.726 ************************************ 00:05:12.726 09:37:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:12.726 09:37:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58934 00:05:12.726 09:37:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58934 00:05:12.726 09:37:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58934 ']' 00:05:12.726 09:37:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.726 09:37:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.726 09:37:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.726 09:37:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.726 09:37:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.726 09:37:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.726 [2024-12-05 09:37:00.144952] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:12.726 [2024-12-05 09:37:00.145072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58934 ] 00:05:12.726 [2024-12-05 09:37:00.299188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.987 [2024-12-05 09:37:00.392112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58934 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58934 00:05:13.558 09:37:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.558 09:37:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58934 00:05:13.558 09:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58934 ']' 00:05:13.558 09:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58934 00:05:13.558 09:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:13.558 09:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.558 09:37:01 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58934 00:05:13.558 09:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.558 09:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.558 killing process with pid 58934 00:05:13.558 09:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58934' 00:05:13.558 09:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58934 00:05:13.558 09:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58934 00:05:14.938 00:05:14.938 real 0m2.227s 00:05:14.938 user 0m2.159s 00:05:14.938 sys 0m0.410s 00:05:14.938 09:37:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.938 09:37:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.938 ************************************ 00:05:14.938 END TEST default_locks_via_rpc 00:05:14.938 ************************************ 00:05:14.938 09:37:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:14.938 09:37:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.938 09:37:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.938 09:37:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.938 ************************************ 00:05:14.938 START TEST non_locking_app_on_locked_coremask 00:05:14.938 ************************************ 00:05:14.938 09:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:14.938 09:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58986 00:05:14.938 09:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58986 /var/tmp/spdk.sock 00:05:14.938 09:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58986 ']' 00:05:14.938 09:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.938 09:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.938 09:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.938 09:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.938 09:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.938 09:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.938 [2024-12-05 09:37:02.403328] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:14.938 [2024-12-05 09:37:02.403422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58986 ] 00:05:14.938 [2024-12-05 09:37:02.551240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.195 [2024-12-05 09:37:02.633979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.761 09:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.761 09:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:15.761 09:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59002 00:05:15.761 09:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59002 /var/tmp/spdk2.sock 00:05:15.761 09:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:15.761 09:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59002 ']' 00:05:15.761 09:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.761 09:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.761 09:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.761 09:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.761 09:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.761 [2024-12-05 09:37:03.316676] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:15.762 [2024-12-05 09:37:03.316799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59002 ] 00:05:16.019 [2024-12-05 09:37:03.480003] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
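The "CPU core locks deactivated." notice above is what lets this second target come up at all: it shares core mask 0x1 with pid 58986, which already holds the claim. A minimal sketch of the pattern under test, using only the flags from this run (the lock-file name for core 0 is inferred from the check_remaining_locks glob later in the log):

    # first instance claims core 0 (mask 0x1), holding /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 &
    # a second instance may share that core only if it skips the claim
    # and listens on its own RPC socket
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &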
00:05:16.019 [2024-12-05 09:37:03.480048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.019 [2024-12-05 09:37:03.633308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.040 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.040 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:17.040 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58986 00:05:17.040 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.040 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58986 00:05:17.308 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58986 00:05:17.308 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58986 ']' 00:05:17.308 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58986 00:05:17.308 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:17.308 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.308 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58986 00:05:17.308 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.308 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.308 killing process with pid 58986 00:05:17.308 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58986' 00:05:17.308 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58986 00:05:17.308 09:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58986 00:05:19.836 09:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59002 00:05:19.836 09:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59002 ']' 00:05:19.836 09:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59002 00:05:19.836 09:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:19.836 09:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.836 09:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59002 00:05:19.836 09:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.836 killing process with pid 59002 00:05:19.836 09:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.836 09:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59002' 00:05:19.836 09:37:07 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59002 00:05:19.836 09:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59002 00:05:21.211 00:05:21.211 real 0m6.383s 00:05:21.211 user 0m6.638s 00:05:21.211 sys 0m0.819s 00:05:21.211 09:37:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.211 09:37:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.211 ************************************ 00:05:21.211 END TEST non_locking_app_on_locked_coremask 00:05:21.211 ************************************ 00:05:21.211 09:37:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:21.211 09:37:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.211 09:37:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.211 09:37:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.211 ************************************ 00:05:21.211 START TEST locking_app_on_unlocked_coremask 00:05:21.211 ************************************ 00:05:21.211 09:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:21.211 09:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59093 00:05:21.211 09:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59093 /var/tmp/spdk.sock 00:05:21.211 09:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59093 ']' 00:05:21.211 09:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.211 09:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.211 09:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.211 09:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.211 09:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.211 09:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:21.211 [2024-12-05 09:37:08.836710] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:21.211 [2024-12-05 09:37:08.836827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59093 ] 00:05:21.469 [2024-12-05 09:37:08.990577] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
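Each of these tests verifies claim state with the same locks_exist helper (seen above for pids 58870 and 58986). From the cpu_locks.sh@22 trace it reduces to roughly the following sketch: a claiming target holds a lock on a /var/tmp/spdk_cpu_lock_* file, and lslocks reports it per pid.

    locks_exist() {
      local pid=$1
      # lslocks lists the file locks held by $pid; any spdk_cpu_lock_*
      # entry means the process has claimed its cores
      lslocks -p "$pid" | grep -q spdk_cpu_lock
    }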
00:05:21.469 [2024-12-05 09:37:08.990613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.470 [2024-12-05 09:37:09.070244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.404 09:37:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.405 09:37:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:22.405 09:37:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59109 00:05:22.405 09:37:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59109 /var/tmp/spdk2.sock 00:05:22.405 09:37:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59109 ']' 00:05:22.405 09:37:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.405 09:37:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:22.405 09:37:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.405 09:37:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.405 09:37:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.405 09:37:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.405 [2024-12-05 09:37:09.743977] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:22.405 [2024-12-05 09:37:09.744114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59109 ] 00:05:22.405 [2024-12-05 09:37:09.909066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.664 [2024-12-05 09:37:10.070903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.597 09:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.597 09:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:23.597 09:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59109 00:05:23.597 09:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59109 00:05:23.597 09:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.855 09:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59093 00:05:23.855 09:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59093 ']' 00:05:23.855 09:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59093 00:05:23.855 09:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:23.855 09:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.855 09:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59093 00:05:23.855 09:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.855 killing process with pid 59093 00:05:23.855 09:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.855 09:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59093' 00:05:23.855 09:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59093 00:05:23.855 09:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59093 00:05:26.382 09:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59109 00:05:26.382 09:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59109 ']' 00:05:26.382 09:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59109 00:05:26.382 09:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:26.382 09:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.382 09:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59109 00:05:26.382 09:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.382 killing process with pid 59109 00:05:26.382 09:37:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.382 09:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59109' 00:05:26.382 09:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59109 00:05:26.382 09:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59109 00:05:27.343 00:05:27.343 real 0m6.117s 00:05:27.343 user 0m6.403s 00:05:27.343 sys 0m0.794s 00:05:27.343 09:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.343 09:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.343 ************************************ 00:05:27.343 END TEST locking_app_on_unlocked_coremask 00:05:27.343 ************************************ 00:05:27.343 09:37:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:27.343 09:37:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.343 09:37:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.343 09:37:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.343 ************************************ 00:05:27.343 START TEST locking_app_on_locked_coremask 00:05:27.343 ************************************ 00:05:27.343 09:37:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:27.343 09:37:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59200 00:05:27.343 09:37:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59200 /var/tmp/spdk.sock 00:05:27.343 09:37:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59200 ']' 00:05:27.343 09:37:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.343 09:37:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.343 09:37:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.343 09:37:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.343 09:37:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.343 09:37:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.602 [2024-12-05 09:37:14.982430] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:27.602 [2024-12-05 09:37:14.982534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59200 ] 00:05:27.602 [2024-12-05 09:37:15.129762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.602 [2024-12-05 09:37:15.209352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59216 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59216 /var/tmp/spdk2.sock 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59216 /var/tmp/spdk2.sock 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59216 /var/tmp/spdk2.sock 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59216 ']' 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.536 09:37:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.536 [2024-12-05 09:37:15.895118] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:28.536 [2024-12-05 09:37:15.895237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59216 ] 00:05:28.536 [2024-12-05 09:37:16.055804] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59200 has claimed it. 00:05:28.536 [2024-12-05 09:37:16.055852] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:29.102 ERROR: process (pid: 59216) is no longer running 00:05:29.102 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59216) - No such process 00:05:29.102 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.102 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:29.102 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:29.102 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:29.102 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:29.102 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:29.102 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59200 00:05:29.102 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59200 00:05:29.102 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.360 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59200 00:05:29.360 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59200 ']' 00:05:29.360 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59200 00:05:29.360 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:29.360 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.360 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59200 00:05:29.360 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.360 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.360 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59200' 00:05:29.360 killing process with pid 59200 00:05:29.360 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59200 00:05:29.360 09:37:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59200 00:05:30.736 00:05:30.736 real 0m3.066s 00:05:30.736 user 0m3.296s 00:05:30.736 sys 0m0.565s 00:05:30.736 09:37:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.736 ************************************ 00:05:30.736 END 
TEST locking_app_on_locked_coremask 00:05:30.736 ************************************ 00:05:30.736 09:37:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.736 09:37:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:30.736 09:37:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.736 09:37:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.736 09:37:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.736 ************************************ 00:05:30.736 START TEST locking_overlapped_coremask 00:05:30.736 ************************************ 00:05:30.736 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:30.736 09:37:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59275 00:05:30.736 09:37:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59275 /var/tmp/spdk.sock 00:05:30.736 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59275 ']' 00:05:30.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.736 09:37:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:30.736 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.736 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.736 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.736 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.736 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.737 [2024-12-05 09:37:18.119196] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:30.737 [2024-12-05 09:37:18.119319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59275 ] 00:05:30.737 [2024-12-05 09:37:18.274946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.737 [2024-12-05 09:37:18.359566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.737 [2024-12-05 09:37:18.359741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.737 [2024-12-05 09:37:18.359835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.675 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.675 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:31.675 09:37:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59287 00:05:31.675 09:37:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59287 /var/tmp/spdk2.sock 00:05:31.675 09:37:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:31.675 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:31.676 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59287 /var/tmp/spdk2.sock 00:05:31.676 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:31.676 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.676 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:31.676 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.676 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59287 /var/tmp/spdk2.sock 00:05:31.676 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59287 ']' 00:05:31.676 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.676 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.676 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.676 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.676 09:37:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.676 [2024-12-05 09:37:19.026150] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:31.676 [2024-12-05 09:37:19.026254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59287 ] 00:05:31.676 [2024-12-05 09:37:19.198377] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59275 has claimed it. 00:05:31.676 [2024-12-05 09:37:19.198435] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:32.243 ERROR: process (pid: 59287) is no longer running 00:05:32.243 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59287) - No such process 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59275 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59275 ']' 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59275 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59275 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.243 killing process with pid 59275 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59275' 00:05:32.243 09:37:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59275 00:05:32.244 09:37:19 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59275 00:05:33.618 00:05:33.618 real 0m2.811s 00:05:33.618 user 0m7.694s 00:05:33.618 sys 0m0.419s 00:05:33.618 09:37:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.618 09:37:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.618 ************************************ 00:05:33.618 END TEST locking_overlapped_coremask 00:05:33.618 ************************************ 00:05:33.618 09:37:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:33.618 09:37:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.618 09:37:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.618 09:37:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.618 ************************************ 00:05:33.618 START TEST locking_overlapped_coremask_via_rpc 00:05:33.618 ************************************ 00:05:33.618 09:37:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:33.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.618 09:37:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59340 00:05:33.618 09:37:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59340 /var/tmp/spdk.sock 00:05:33.618 09:37:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59340 ']' 00:05:33.618 09:37:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.618 09:37:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.618 09:37:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.618 09:37:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.618 09:37:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.618 09:37:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:33.618 [2024-12-05 09:37:20.981075] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:33.618 [2024-12-05 09:37:20.981197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59340 ] 00:05:33.618 [2024-12-05 09:37:21.138029] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
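The check_remaining_locks step at the end of the overlapped run shows the on-disk layout directly: one lock file per claimed core, named by zero-padded core id, so the still-running -m 0x7 target (pid 59275) should own exactly /var/tmp/spdk_cpu_lock_000 through _002. The comparison from the cpu_locks.sh@36-38 trace, restated as a sketch:

    # glob whatever lock files exist and compare against cores 0-2 (mask 0x7)
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]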
00:05:33.618 [2024-12-05 09:37:21.138065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.618 [2024-12-05 09:37:21.221296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.618 [2024-12-05 09:37:21.221451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.618 [2024-12-05 09:37:21.221451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.184 09:37:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.184 09:37:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:34.184 09:37:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59358 00:05:34.184 09:37:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59358 /var/tmp/spdk2.sock 00:05:34.184 09:37:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59358 ']' 00:05:34.184 09:37:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.184 09:37:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.184 09:37:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.184 09:37:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.184 09:37:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.184 09:37:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:34.444 [2024-12-05 09:37:21.886313] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:34.444 [2024-12-05 09:37:21.886751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59358 ] 00:05:34.444 [2024-12-05 09:37:22.060634] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:34.444 [2024-12-05 09:37:22.060682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.702 [2024-12-05 09:37:22.265669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.702 [2024-12-05 09:37:22.265712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.702 [2024-12-05 09:37:22.265733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.637 [2024-12-05 09:37:23.218622] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59340 has claimed it. 00:05:35.637 request: 00:05:35.637 { 00:05:35.637 "method": "framework_enable_cpumask_locks", 00:05:35.637 "req_id": 1 00:05:35.637 } 00:05:35.637 Got JSON-RPC error response 00:05:35.637 response: 00:05:35.637 { 00:05:35.637 "code": -32603, 00:05:35.637 "message": "Failed to claim CPU core: 2" 00:05:35.637 } 00:05:35.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
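What the exchange above demonstrates: every claimed core is backed by a lock file /var/tmp/spdk_cpu_lock_<core>, so when the second target (cores 2-4) asks to claim locks while the first target (cores 0-2) already holds them, the overlap on core 2 fails with JSON-RPC error -32603. A minimal by-hand sketch of the same check, assuming the socket paths this test uses:

    ls /var/tmp/spdk_cpu_lock_*        # one lock file per claimed core, e.g. spdk_cpu_lock_002
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # while the first target still holds core 2, this returns:
    #   {"code": -32603, "message": "Failed to claim CPU core: 2"}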
00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59340 /var/tmp/spdk.sock 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59340 ']' 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.637 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.895 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.895 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:35.895 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59358 /var/tmp/spdk2.sock 00:05:35.895 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59358 ']' 00:05:35.895 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.895 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.895 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:35.895 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.895 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.156 ************************************ 00:05:36.156 END TEST locking_overlapped_coremask_via_rpc 00:05:36.156 ************************************ 00:05:36.156 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.156 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.156 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:36.156 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:36.156 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:36.156 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:36.156 00:05:36.156 real 0m2.728s 00:05:36.156 user 0m1.053s 00:05:36.156 sys 0m0.134s 00:05:36.156 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.156 09:37:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.156 09:37:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:36.156 09:37:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59340 ]] 00:05:36.156 09:37:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59340 00:05:36.156 09:37:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59340 ']' 00:05:36.156 09:37:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59340 00:05:36.156 09:37:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:36.156 09:37:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.156 09:37:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59340 00:05:36.156 killing process with pid 59340 00:05:36.156 09:37:23 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.156 09:37:23 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.156 09:37:23 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59340' 00:05:36.156 09:37:23 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59340 00:05:36.156 09:37:23 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59340 00:05:37.608 09:37:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59358 ]] 00:05:37.608 09:37:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59358 00:05:37.608 09:37:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59358 ']' 00:05:37.608 09:37:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59358 00:05:37.608 09:37:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:37.608 09:37:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.608 
09:37:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59358 00:05:37.608 killing process with pid 59358 00:05:37.608 09:37:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:37.608 09:37:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:37.608 09:37:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59358' 00:05:37.608 09:37:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59358 00:05:37.608 09:37:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59358 00:05:38.542 09:37:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:38.542 Process with pid 59340 is not found 00:05:38.542 Process with pid 59358 is not found 00:05:38.542 09:37:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:38.542 09:37:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59340 ]] 00:05:38.542 09:37:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59340 00:05:38.542 09:37:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59340 ']' 00:05:38.542 09:37:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59340 00:05:38.542 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59340) - No such process 00:05:38.542 09:37:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59340 is not found' 00:05:38.542 09:37:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59358 ]] 00:05:38.542 09:37:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59358 00:05:38.542 09:37:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59358 ']' 00:05:38.542 09:37:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59358 00:05:38.542 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59358) - No such process 00:05:38.542 09:37:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59358 is not found' 00:05:38.542 09:37:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:38.542 00:05:38.542 real 0m28.658s 00:05:38.542 user 0m48.823s 00:05:38.542 sys 0m4.373s 00:05:38.542 ************************************ 00:05:38.542 END TEST cpu_locks 00:05:38.542 ************************************ 00:05:38.542 09:37:26 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.542 09:37:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.542 00:05:38.542 real 0m55.301s 00:05:38.542 user 1m41.644s 00:05:38.542 sys 0m7.110s 00:05:38.542 09:37:26 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.542 09:37:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.542 ************************************ 00:05:38.542 END TEST event 00:05:38.542 ************************************ 00:05:38.542 09:37:26 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:38.542 09:37:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.542 09:37:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.542 09:37:26 -- common/autotest_common.sh@10 -- # set +x 00:05:38.542 ************************************ 00:05:38.542 START TEST thread 00:05:38.542 ************************************ 00:05:38.542 09:37:26 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:38.800 * Looking for test storage... 
00:05:38.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:38.800 09:37:26 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:38.800 09:37:26 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:38.800 09:37:26 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:38.800 09:37:26 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:38.800 09:37:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.800 09:37:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.800 09:37:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.800 09:37:26 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.800 09:37:26 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.800 09:37:26 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.800 09:37:26 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.800 09:37:26 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.800 09:37:26 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.800 09:37:26 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.800 09:37:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.800 09:37:26 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:38.800 09:37:26 thread -- scripts/common.sh@345 -- # : 1 00:05:38.800 09:37:26 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.800 09:37:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.800 09:37:26 thread -- scripts/common.sh@365 -- # decimal 1 00:05:38.800 09:37:26 thread -- scripts/common.sh@353 -- # local d=1 00:05:38.800 09:37:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.800 09:37:26 thread -- scripts/common.sh@355 -- # echo 1 00:05:38.800 09:37:26 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.800 09:37:26 thread -- scripts/common.sh@366 -- # decimal 2 00:05:38.800 09:37:26 thread -- scripts/common.sh@353 -- # local d=2 00:05:38.800 09:37:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.800 09:37:26 thread -- scripts/common.sh@355 -- # echo 2 00:05:38.800 09:37:26 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.800 09:37:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.800 09:37:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.800 09:37:26 thread -- scripts/common.sh@368 -- # return 0 00:05:38.800 09:37:26 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.800 09:37:26 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:38.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.800 --rc genhtml_branch_coverage=1 00:05:38.800 --rc genhtml_function_coverage=1 00:05:38.800 --rc genhtml_legend=1 00:05:38.800 --rc geninfo_all_blocks=1 00:05:38.800 --rc geninfo_unexecuted_blocks=1 00:05:38.800 00:05:38.800 ' 00:05:38.800 09:37:26 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:38.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.800 --rc genhtml_branch_coverage=1 00:05:38.800 --rc genhtml_function_coverage=1 00:05:38.800 --rc genhtml_legend=1 00:05:38.800 --rc geninfo_all_blocks=1 00:05:38.800 --rc geninfo_unexecuted_blocks=1 00:05:38.800 00:05:38.800 ' 00:05:38.800 09:37:26 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:38.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:38.800 --rc genhtml_branch_coverage=1 00:05:38.800 --rc genhtml_function_coverage=1 00:05:38.800 --rc genhtml_legend=1 00:05:38.800 --rc geninfo_all_blocks=1 00:05:38.800 --rc geninfo_unexecuted_blocks=1 00:05:38.800 00:05:38.800 ' 00:05:38.800 09:37:26 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:38.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.800 --rc genhtml_branch_coverage=1 00:05:38.800 --rc genhtml_function_coverage=1 00:05:38.800 --rc genhtml_legend=1 00:05:38.800 --rc geninfo_all_blocks=1 00:05:38.800 --rc geninfo_unexecuted_blocks=1 00:05:38.800 00:05:38.800 ' 00:05:38.800 09:37:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:38.800 09:37:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:38.801 09:37:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.801 09:37:26 thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.801 ************************************ 00:05:38.801 START TEST thread_poller_perf 00:05:38.801 ************************************ 00:05:38.801 09:37:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:38.801 [2024-12-05 09:37:26.302337] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:38.801 [2024-12-05 09:37:26.302436] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59513 ] 00:05:39.059 [2024-12-05 09:37:26.462381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.059 Running 1000 pollers for 1 seconds with 1 microseconds period. 
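For reference, the poller_perf flags in the invocation above map onto the banner the tool prints (an inference from this run's output, not a full option listing):

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
    #   -b 1000   register 1000 pollers
    #   -l 1      poller period in microseconds (the second run below passes -l 0)
    #   -t 1      run the measurement for 1 second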
00:05:39.059 [2024-12-05 09:37:26.558681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.434 [2024-12-05T09:37:28.063Z] ====================================== 00:05:40.434 [2024-12-05T09:37:28.064Z] busy:2609378352 (cyc) 00:05:40.435 [2024-12-05T09:37:28.064Z] total_run_count: 306000 00:05:40.435 [2024-12-05T09:37:28.064Z] tsc_hz: 2600000000 (cyc) 00:05:40.435 [2024-12-05T09:37:28.064Z] ====================================== 00:05:40.435 [2024-12-05T09:37:28.064Z] poller_cost: 8527 (cyc), 3279 (nsec) 00:05:40.435 00:05:40.435 real 0m1.439s 00:05:40.435 user 0m1.273s 00:05:40.435 sys 0m0.059s 00:05:40.435 09:37:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.435 09:37:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:40.435 ************************************ 00:05:40.435 END TEST thread_poller_perf 00:05:40.435 ************************************ 00:05:40.435 09:37:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:40.435 09:37:27 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:40.435 09:37:27 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.435 09:37:27 thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.435 ************************************ 00:05:40.435 START TEST thread_poller_perf 00:05:40.435 ************************************ 00:05:40.435 09:37:27 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:40.435 [2024-12-05 09:37:27.793495] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:40.435 [2024-12-05 09:37:27.793613] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59549 ] 00:05:40.435 [2024-12-05 09:37:27.947280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.435 Running 1000 pollers for 1 seconds with 0 microseconds period. 
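The poller_cost line is simply the ratio of the two counters printed above it; checking the first run's numbers (the -l 0 run that follows works out the same way: 2603191092 / 3935000 = 661 cyc):

    # busy cycles / total_run_count = cycles per poller invocation
    echo $(( 2609378352 / 306000 ))    # -> 8527 (cyc)
    # convert cycles to nanoseconds at tsc_hz 2600000000, i.e. 2.6 cycles/nsec
    echo $(( 8527 * 1000 / 2600 ))     # -> 3279 (nsec)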
00:05:40.435 [2024-12-05 09:37:28.044430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.809 [2024-12-05T09:37:29.438Z] ====================================== 00:05:41.809 [2024-12-05T09:37:29.438Z] busy:2603191092 (cyc) 00:05:41.809 [2024-12-05T09:37:29.438Z] total_run_count: 3935000 00:05:41.809 [2024-12-05T09:37:29.438Z] tsc_hz: 2600000000 (cyc) 00:05:41.809 [2024-12-05T09:37:29.438Z] ====================================== 00:05:41.809 [2024-12-05T09:37:29.438Z] poller_cost: 661 (cyc), 254 (nsec) 00:05:41.809 00:05:41.809 real 0m1.437s 00:05:41.809 user 0m1.264s 00:05:41.809 sys 0m0.067s 00:05:41.809 09:37:29 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.809 09:37:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.809 ************************************ 00:05:41.809 END TEST thread_poller_perf 00:05:41.809 ************************************ 00:05:41.809 09:37:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:41.809 00:05:41.809 real 0m3.095s 00:05:41.809 user 0m2.644s 00:05:41.809 sys 0m0.227s 00:05:41.809 09:37:29 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.809 ************************************ 00:05:41.809 END TEST thread 00:05:41.809 ************************************ 00:05:41.809 09:37:29 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.809 09:37:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:41.809 09:37:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:41.809 09:37:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.809 09:37:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.809 09:37:29 -- common/autotest_common.sh@10 -- # set +x 00:05:41.809 ************************************ 00:05:41.809 START TEST app_cmdline 00:05:41.809 ************************************ 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:41.809 * Looking for test storage... 
00:05:41.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.809 09:37:29 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.809 --rc genhtml_branch_coverage=1 00:05:41.809 --rc genhtml_function_coverage=1 00:05:41.809 --rc genhtml_legend=1 00:05:41.809 --rc geninfo_all_blocks=1 00:05:41.809 --rc geninfo_unexecuted_blocks=1 00:05:41.809 00:05:41.809 ' 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.809 --rc genhtml_branch_coverage=1 00:05:41.809 --rc genhtml_function_coverage=1 00:05:41.809 --rc genhtml_legend=1 00:05:41.809 --rc geninfo_all_blocks=1 00:05:41.809 --rc geninfo_unexecuted_blocks=1 00:05:41.809 
00:05:41.809 ' 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.809 --rc genhtml_branch_coverage=1 00:05:41.809 --rc genhtml_function_coverage=1 00:05:41.809 --rc genhtml_legend=1 00:05:41.809 --rc geninfo_all_blocks=1 00:05:41.809 --rc geninfo_unexecuted_blocks=1 00:05:41.809 00:05:41.809 ' 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.809 --rc genhtml_branch_coverage=1 00:05:41.809 --rc genhtml_function_coverage=1 00:05:41.809 --rc genhtml_legend=1 00:05:41.809 --rc geninfo_all_blocks=1 00:05:41.809 --rc geninfo_unexecuted_blocks=1 00:05:41.809 00:05:41.809 ' 00:05:41.809 09:37:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:41.809 09:37:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59633 00:05:41.809 09:37:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59633 00:05:41.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59633 ']' 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.809 09:37:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:41.809 09:37:29 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:42.068 [2024-12-05 09:37:29.502378] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:42.068 [2024-12-05 09:37:29.502503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59633 ] 00:05:42.068 [2024-12-05 09:37:29.660308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.326 [2024-12-05 09:37:29.760527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.892 09:37:30 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.892 09:37:30 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:42.892 09:37:30 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:43.150 { 00:05:43.150 "version": "SPDK v25.01-pre git sha1 8d3947977", 00:05:43.150 "fields": { 00:05:43.150 "major": 25, 00:05:43.150 "minor": 1, 00:05:43.150 "patch": 0, 00:05:43.150 "suffix": "-pre", 00:05:43.150 "commit": "8d3947977" 00:05:43.150 } 00:05:43.150 } 00:05:43.150 09:37:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:43.150 09:37:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:43.150 09:37:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:43.150 09:37:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:43.150 09:37:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:43.150 09:37:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:43.150 09:37:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:43.150 09:37:30 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.150 09:37:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:43.150 09:37:30 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.150 09:37:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:43.150 09:37:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:43.150 09:37:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.150 09:37:30 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:43.150 09:37:30 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.150 09:37:30 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:43.150 09:37:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.150 09:37:30 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:43.150 09:37:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.150 09:37:30 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:43.150 09:37:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.150 09:37:30 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:43.150 09:37:30 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:43.150 09:37:30 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.150 request: 00:05:43.150 { 00:05:43.150 "method": "env_dpdk_get_mem_stats", 00:05:43.150 "req_id": 1 00:05:43.150 } 00:05:43.150 Got JSON-RPC error response 00:05:43.150 response: 00:05:43.150 { 00:05:43.150 "code": -32601, 00:05:43.150 "message": "Method not found" 00:05:43.150 } 00:05:43.408 09:37:30 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:43.408 09:37:30 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.408 09:37:30 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:43.408 09:37:30 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.408 09:37:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59633 00:05:43.408 09:37:30 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59633 ']' 00:05:43.408 09:37:30 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59633 00:05:43.408 09:37:30 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:43.408 09:37:30 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.408 09:37:30 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59633 00:05:43.408 09:37:30 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.408 killing process with pid 59633 00:05:43.408 09:37:30 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.408 09:37:30 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59633' 00:05:43.408 09:37:30 app_cmdline -- common/autotest_common.sh@973 -- # kill 59633 00:05:43.408 09:37:30 app_cmdline -- common/autotest_common.sh@978 -- # wait 59633 00:05:44.779 00:05:44.779 real 0m3.006s 00:05:44.779 user 0m3.271s 00:05:44.779 sys 0m0.461s 00:05:44.779 09:37:32 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.779 ************************************ 00:05:44.779 END TEST app_cmdline 00:05:44.779 ************************************ 00:05:44.779 09:37:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.779 09:37:32 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:44.779 09:37:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.779 09:37:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.779 09:37:32 -- common/autotest_common.sh@10 -- # set +x 00:05:44.779 ************************************ 00:05:44.779 START TEST version 00:05:44.779 ************************************ 00:05:44.779 09:37:32 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:44.779 * Looking for test storage... 
00:05:45.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:45.038 09:37:32 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.038 09:37:32 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.038 09:37:32 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.038 09:37:32 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.038 09:37:32 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.038 09:37:32 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.038 09:37:32 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.038 09:37:32 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.038 09:37:32 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.038 09:37:32 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.038 09:37:32 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.038 09:37:32 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.038 09:37:32 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.038 09:37:32 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.038 09:37:32 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.038 09:37:32 version -- scripts/common.sh@344 -- # case "$op" in 00:05:45.038 09:37:32 version -- scripts/common.sh@345 -- # : 1 00:05:45.038 09:37:32 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.038 09:37:32 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.038 09:37:32 version -- scripts/common.sh@365 -- # decimal 1 00:05:45.038 09:37:32 version -- scripts/common.sh@353 -- # local d=1 00:05:45.038 09:37:32 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.038 09:37:32 version -- scripts/common.sh@355 -- # echo 1 00:05:45.038 09:37:32 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.038 09:37:32 version -- scripts/common.sh@366 -- # decimal 2 00:05:45.038 09:37:32 version -- scripts/common.sh@353 -- # local d=2 00:05:45.038 09:37:32 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.038 09:37:32 version -- scripts/common.sh@355 -- # echo 2 00:05:45.039 09:37:32 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.039 09:37:32 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.039 09:37:32 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.039 09:37:32 version -- scripts/common.sh@368 -- # return 0 00:05:45.039 09:37:32 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.039 09:37:32 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.039 --rc genhtml_branch_coverage=1 00:05:45.039 --rc genhtml_function_coverage=1 00:05:45.039 --rc genhtml_legend=1 00:05:45.039 --rc geninfo_all_blocks=1 00:05:45.039 --rc geninfo_unexecuted_blocks=1 00:05:45.039 00:05:45.039 ' 00:05:45.039 09:37:32 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.039 --rc genhtml_branch_coverage=1 00:05:45.039 --rc genhtml_function_coverage=1 00:05:45.039 --rc genhtml_legend=1 00:05:45.039 --rc geninfo_all_blocks=1 00:05:45.039 --rc geninfo_unexecuted_blocks=1 00:05:45.039 00:05:45.039 ' 00:05:45.039 09:37:32 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.039 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:45.039 --rc genhtml_branch_coverage=1 00:05:45.039 --rc genhtml_function_coverage=1 00:05:45.039 --rc genhtml_legend=1 00:05:45.039 --rc geninfo_all_blocks=1 00:05:45.039 --rc geninfo_unexecuted_blocks=1 00:05:45.039 00:05:45.039 ' 00:05:45.039 09:37:32 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.039 --rc genhtml_branch_coverage=1 00:05:45.039 --rc genhtml_function_coverage=1 00:05:45.039 --rc genhtml_legend=1 00:05:45.039 --rc geninfo_all_blocks=1 00:05:45.039 --rc geninfo_unexecuted_blocks=1 00:05:45.039 00:05:45.039 ' 00:05:45.039 09:37:32 version -- app/version.sh@17 -- # get_header_version major 00:05:45.039 09:37:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:45.039 09:37:32 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.039 09:37:32 version -- app/version.sh@14 -- # cut -f2 00:05:45.039 09:37:32 version -- app/version.sh@17 -- # major=25 00:05:45.039 09:37:32 version -- app/version.sh@18 -- # get_header_version minor 00:05:45.039 09:37:32 version -- app/version.sh@14 -- # cut -f2 00:05:45.039 09:37:32 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.039 09:37:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:45.039 09:37:32 version -- app/version.sh@18 -- # minor=1 00:05:45.039 09:37:32 version -- app/version.sh@19 -- # get_header_version patch 00:05:45.039 09:37:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:45.039 09:37:32 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.039 09:37:32 version -- app/version.sh@14 -- # cut -f2 00:05:45.039 09:37:32 version -- app/version.sh@19 -- # patch=0 00:05:45.039 09:37:32 version -- app/version.sh@20 -- # get_header_version suffix 00:05:45.039 09:37:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:45.039 09:37:32 version -- app/version.sh@14 -- # cut -f2 00:05:45.039 09:37:32 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.039 09:37:32 version -- app/version.sh@20 -- # suffix=-pre 00:05:45.039 09:37:32 version -- app/version.sh@22 -- # version=25.1 00:05:45.039 09:37:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:45.039 09:37:32 version -- app/version.sh@28 -- # version=25.1rc0 00:05:45.039 09:37:32 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:45.039 09:37:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:45.039 09:37:32 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:45.039 09:37:32 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:45.039 00:05:45.039 real 0m0.173s 00:05:45.039 user 0m0.106s 00:05:45.039 sys 0m0.094s 00:05:45.039 09:37:32 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.039 ************************************ 00:05:45.039 END TEST version 00:05:45.039 ************************************ 00:05:45.039 09:37:32 version -- common/autotest_common.sh@10 -- # set +x 00:05:45.039 09:37:32 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:45.039 09:37:32 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:45.039 09:37:32 -- spdk/autotest.sh@194 -- # uname -s 00:05:45.039 09:37:32 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:45.039 09:37:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:45.039 09:37:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:45.039 09:37:32 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:45.039 09:37:32 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:45.039 09:37:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:45.039 09:37:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.039 09:37:32 -- common/autotest_common.sh@10 -- # set +x 00:05:45.039 ************************************ 00:05:45.039 START TEST blockdev_nvme 00:05:45.039 ************************************ 00:05:45.039 09:37:32 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:45.039 * Looking for test storage... 00:05:45.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:45.039 09:37:32 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.039 09:37:32 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.039 09:37:32 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.298 09:37:32 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.298 09:37:32 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:45.298 09:37:32 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.298 09:37:32 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.298 --rc genhtml_branch_coverage=1 00:05:45.298 --rc genhtml_function_coverage=1 00:05:45.298 --rc genhtml_legend=1 00:05:45.298 --rc geninfo_all_blocks=1 00:05:45.298 --rc geninfo_unexecuted_blocks=1 00:05:45.298 00:05:45.298 ' 00:05:45.298 09:37:32 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.298 --rc genhtml_branch_coverage=1 00:05:45.298 --rc genhtml_function_coverage=1 00:05:45.298 --rc genhtml_legend=1 00:05:45.298 --rc geninfo_all_blocks=1 00:05:45.298 --rc geninfo_unexecuted_blocks=1 00:05:45.298 00:05:45.298 ' 00:05:45.298 09:37:32 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.298 --rc genhtml_branch_coverage=1 00:05:45.298 --rc genhtml_function_coverage=1 00:05:45.298 --rc genhtml_legend=1 00:05:45.298 --rc geninfo_all_blocks=1 00:05:45.298 --rc geninfo_unexecuted_blocks=1 00:05:45.298 00:05:45.298 ' 00:05:45.298 09:37:32 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.298 --rc genhtml_branch_coverage=1 00:05:45.298 --rc genhtml_function_coverage=1 00:05:45.298 --rc genhtml_legend=1 00:05:45.298 --rc geninfo_all_blocks=1 00:05:45.298 --rc geninfo_unexecuted_blocks=1 00:05:45.298 00:05:45.298 ' 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:45.298 09:37:32 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:05:45.298 09:37:32 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59805 00:05:45.299 09:37:32 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:45.299 09:37:32 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59805 00:05:45.299 09:37:32 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59805 ']' 00:05:45.299 09:37:32 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.299 09:37:32 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.299 09:37:32 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.299 09:37:32 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.299 09:37:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:45.299 09:37:32 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:45.299 [2024-12-05 09:37:32.789080] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:45.299 [2024-12-05 09:37:32.789202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59805 ] 00:05:45.556 [2024-12-05 09:37:32.954757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.556 [2024-12-05 09:37:33.052436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.206 09:37:33 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.206 09:37:33 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:46.206 09:37:33 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:05:46.206 09:37:33 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:05:46.206 09:37:33 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:46.206 09:37:33 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:46.206 09:37:33 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:46.206 09:37:33 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:46.206 09:37:33 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.206 09:37:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:46.465 09:37:33 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.465 09:37:33 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:05:46.465 09:37:33 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.465 09:37:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:46.465 09:37:33 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.465 09:37:33 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:05:46.465 09:37:33 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:05:46.465 09:37:33 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.465 09:37:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:46.465 09:37:33 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.465 09:37:33 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:05:46.465 09:37:33 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.465 09:37:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:46.465 09:37:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.465 09:37:34 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:46.465 09:37:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.465 09:37:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:46.465 09:37:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.465 09:37:34 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:05:46.465 09:37:34 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:05:46.465 09:37:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.465 09:37:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:46.465 09:37:34 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:05:46.465 09:37:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.465 09:37:34 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:05:46.465 09:37:34 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:05:46.466 09:37:34 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6a9ca257-fe6c-41b6-91b1-37e1f4e0e09c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6a9ca257-fe6c-41b6-91b1-37e1f4e0e09c",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "cd922828-adf3-4c5a-af60-70cb0a3113d1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cd922828-adf3-4c5a-af60-70cb0a3113d1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "42a82c98-b106-4495-88c2-05a9dac35b42"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "42a82c98-b106-4495-88c2-05a9dac35b42",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e9e93d04-54e9-4c38-a3d7-c87b2bfcaa89"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e9e93d04-54e9-4c38-a3d7-c87b2bfcaa89",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "31a5d42b-074c-4368-9a41-d53df7723b8f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "31a5d42b-074c-4368-9a41-d53df7723b8f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "6113c262-c341-4566-bbd9-359c347b8caa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6113c262-c341-4566-bbd9-359c347b8caa",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:46.724 09:37:34 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:05:46.724 09:37:34 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:05:46.724 09:37:34 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:05:46.724 09:37:34 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59805 00:05:46.724 09:37:34 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59805 ']' 00:05:46.724 09:37:34 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59805 00:05:46.724 09:37:34 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:46.724 09:37:34 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.724 09:37:34 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59805 00:05:46.724 09:37:34 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.724 09:37:34 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.724 09:37:34 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59805' 00:05:46.724 killing process with pid 59805 00:05:46.724 09:37:34 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59805 00:05:46.724 09:37:34 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59805 00:05:48.100 09:37:35 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:48.100 09:37:35 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:48.100 09:37:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:48.100 09:37:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.100 09:37:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:48.100 ************************************ 00:05:48.100 START TEST bdev_hello_world 00:05:48.100 ************************************ 00:05:48.100 09:37:35 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:48.100 [2024-12-05 09:37:35.362356] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:48.100 [2024-12-05 09:37:35.362472] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59883 ] 00:05:48.100 [2024-12-05 09:37:35.517560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.100 [2024-12-05 09:37:35.598307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.665 [2024-12-05 09:37:36.094410] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:48.665 [2024-12-05 09:37:36.094457] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:48.665 [2024-12-05 09:37:36.094472] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:48.665 [2024-12-05 09:37:36.096499] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:48.665 [2024-12-05 09:37:36.097040] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:48.665 [2024-12-05 09:37:36.097063] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:48.665 [2024-12-05 09:37:36.097195] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
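The hello_bdev example exercised above is a self-contained bdev client: it opens the bdev named by -b, writes "Hello World!" into the first block, reads it back, and stops the app. To rerun just this step by hand against the same config, a minimal sketch (paths assume the vagrant repo layout used throughout this job):

  cd /home/vagrant/spdk_repo/spdk
  # drives a single write/read cycle against the named bdev from bdev.json
  sudo ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1

Any other bdev name from the bdev_get_bdevs dump earlier (Nvme1n1, Nvme2n1, ...) works as the -b argument.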
00:05:48.665 00:05:48.665 [2024-12-05 09:37:36.097208] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:49.361 00:05:49.361 real 0m1.358s 00:05:49.361 user 0m1.092s 00:05:49.361 sys 0m0.160s 00:05:49.361 ************************************ 00:05:49.361 END TEST bdev_hello_world 00:05:49.361 ************************************ 00:05:49.361 09:37:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.361 09:37:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:49.361 09:37:36 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:05:49.361 09:37:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:49.361 09:37:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.361 09:37:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.361 ************************************ 00:05:49.361 START TEST bdev_bounds 00:05:49.361 ************************************ 00:05:49.361 09:37:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:05:49.361 Process bdevio pid: 59920 00:05:49.361 09:37:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59920 00:05:49.361 09:37:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.361 09:37:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59920' 00:05:49.361 09:37:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59920 00:05:49.361 09:37:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59920 ']' 00:05:49.361 09:37:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.361 09:37:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.361 09:37:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.361 09:37:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.361 09:37:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:49.361 09:37:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:49.361 [2024-12-05 09:37:36.766142] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:49.361 [2024-12-05 09:37:36.766267] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59920 ] 00:05:49.361 [2024-12-05 09:37:36.922359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.621 [2024-12-05 09:37:37.008499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.621 [2024-12-05 09:37:37.008669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.621 [2024-12-05 09:37:37.008669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.193 09:37:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.193 09:37:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:05:50.193 09:37:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:50.193 I/O targets: 00:05:50.193 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:50.193 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:50.193 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:50.193 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:50.193 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:50.193 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:50.193 00:05:50.193 00:05:50.193 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.193 http://cunit.sourceforge.net/ 00:05:50.193 00:05:50.193 00:05:50.193 Suite: bdevio tests on: Nvme3n1 00:05:50.193 Test: blockdev write read block ...passed 00:05:50.193 Test: blockdev write zeroes read block ...passed 00:05:50.193 Test: blockdev write zeroes read no split ...passed 00:05:50.193 Test: blockdev write zeroes read split ...passed 00:05:50.193 Test: blockdev write zeroes read split partial ...passed 00:05:50.193 Test: blockdev reset ...[2024-12-05 09:37:37.738214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:50.193 [2024-12-05 09:37:37.742101] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
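The bdevio run that produces the suites below is a two-step flow, both steps visible in the trace above: the bdevio binary is started with -w so it sets up the bdevs and then waits, and tests.py triggers the CUnit suites over RPC. A minimal manual equivalent, reusing the flags from this log (a sketch; run from the repo root with the same bdev.json):

  sudo ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  # once the RPC socket is listening, kick off all registered suites
  sudo ./test/bdev/bdevio/tests.py perform_tests

Each suite then prints one "Test: ... passed" line per case, interleaved with NOTICE lines from the NVMe driver; the COMPARE FAILURE completions below accompany tests that still report passed, since the compare cases exercise the failure path deliberately.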
00:05:50.193 passed 00:05:50.193 Test: blockdev write read 8 blocks ...passed 00:05:50.193 Test: blockdev write read size > 128k ...passed 00:05:50.193 Test: blockdev write read invalid size ...passed 00:05:50.193 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:50.193 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:50.193 Test: blockdev write read max offset ...passed 00:05:50.193 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:50.193 Test: blockdev writev readv 8 blocks ...passed 00:05:50.193 Test: blockdev writev readv 30 x 1block ...passed 00:05:50.193 Test: blockdev writev readv block ...passed 00:05:50.193 Test: blockdev writev readv size > 128k ...passed 00:05:50.193 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:50.193 Test: blockdev comparev and writev ...[2024-12-05 09:37:37.758171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b760a000 len:0x1000 00:05:50.193 [2024-12-05 09:37:37.758221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:50.193 passed 00:05:50.193 Test: blockdev nvme passthru rw ...passed 00:05:50.193 Test: blockdev nvme passthru vendor specific ...[2024-12-05 09:37:37.759819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:50.193 [2024-12-05 09:37:37.760010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:50.193 passed 00:05:50.193 Test: blockdev nvme admin passthru ...passed 00:05:50.193 Test: blockdev copy ...passed 00:05:50.193 Suite: bdevio tests on: Nvme2n3 00:05:50.193 Test: blockdev write read block ...passed 00:05:50.193 Test: blockdev write zeroes read block ...passed 00:05:50.193 Test: blockdev write zeroes read no split ...passed 00:05:50.193 Test: blockdev write zeroes read split ...passed 00:05:50.455 Test: blockdev write zeroes read split partial ...passed 00:05:50.455 Test: blockdev reset ...[2024-12-05 09:37:37.822158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:50.455 [2024-12-05 09:37:37.828651] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:05:50.455 passed 00:05:50.455 Test: blockdev write read 8 blocks ... 
00:05:50.455 passed 00:05:50.455 Test: blockdev write read size > 128k ...passed 00:05:50.455 Test: blockdev write read invalid size ...passed 00:05:50.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:50.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:50.455 Test: blockdev write read max offset ...passed 00:05:50.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:50.455 Test: blockdev writev readv 8 blocks ...passed 00:05:50.455 Test: blockdev writev readv 30 x 1block ...passed 00:05:50.455 Test: blockdev writev readv block ...passed 00:05:50.455 Test: blockdev writev readv size > 128k ...passed 00:05:50.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:50.455 Test: blockdev comparev and writev ...[2024-12-05 09:37:37.842755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29a006000 len:0x1000 00:05:50.455 [2024-12-05 09:37:37.842793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:50.455 passed 00:05:50.455 Test: blockdev nvme passthru rw ...passed 00:05:50.455 Test: blockdev nvme passthru vendor specific ...passed 00:05:50.455 Test: blockdev nvme admin passthru ...[2024-12-05 09:37:37.844192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:50.455 [2024-12-05 09:37:37.844222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:50.455 passed 00:05:50.455 Test: blockdev copy ...passed 00:05:50.455 Suite: bdevio tests on: Nvme2n2 00:05:50.455 Test: blockdev write read block ...passed 00:05:50.455 Test: blockdev write zeroes read block ...passed 00:05:50.455 Test: blockdev write zeroes read no split ...passed 00:05:50.455 Test: blockdev write zeroes read split ...passed 00:05:50.455 Test: blockdev write zeroes read split partial ...passed 00:05:50.455 Test: blockdev reset ...[2024-12-05 09:37:37.903809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:50.455 [2024-12-05 09:37:37.908134] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:05:50.455 passed 00:05:50.455 Test: blockdev write read 8 blocks ... 
00:05:50.455 passed 00:05:50.455 Test: blockdev write read size > 128k ...passed 00:05:50.455 Test: blockdev write read invalid size ...passed 00:05:50.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:50.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:50.455 Test: blockdev write read max offset ...passed 00:05:50.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:50.455 Test: blockdev writev readv 8 blocks ...passed 00:05:50.455 Test: blockdev writev readv 30 x 1block ...passed 00:05:50.455 Test: blockdev writev readv block ...passed 00:05:50.455 Test: blockdev writev readv size > 128k ...passed 00:05:50.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:50.455 Test: blockdev comparev and writev ...[2024-12-05 09:37:37.923562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d223c000 len:0x1000 00:05:50.455 [2024-12-05 09:37:37.923597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:50.455 passed 00:05:50.455 Test: blockdev nvme passthru rw ...passed 00:05:50.455 Test: blockdev nvme passthru vendor specific ...passed 00:05:50.455 Test: blockdev nvme admin passthru ...[2024-12-05 09:37:37.925379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:50.455 [2024-12-05 09:37:37.925405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:50.455 passed 00:05:50.455 Test: blockdev copy ...passed 00:05:50.455 Suite: bdevio tests on: Nvme2n1 00:05:50.455 Test: blockdev write read block ...passed 00:05:50.455 Test: blockdev write zeroes read block ...passed 00:05:50.455 Test: blockdev write zeroes read no split ...passed 00:05:50.455 Test: blockdev write zeroes read split ...passed 00:05:50.455 Test: blockdev write zeroes read split partial ...passed 00:05:50.455 Test: blockdev reset ...[2024-12-05 09:37:37.984566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:50.455 [2024-12-05 09:37:37.987600] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:50.455 passed 00:05:50.455 Test: blockdev write read 8 blocks ...passed 00:05:50.455 Test: blockdev write read size > 128k ...passed 00:05:50.455 Test: blockdev write read invalid size ...passed 00:05:50.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:50.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:50.455 Test: blockdev write read max offset ...passed 00:05:50.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:50.455 Test: blockdev writev readv 8 blocks ...passed 00:05:50.455 Test: blockdev writev readv 30 x 1block ...passed 00:05:50.455 Test: blockdev writev readv block ...passed 00:05:50.455 Test: blockdev writev readv size > 128k ...passed 00:05:50.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:50.455 Test: blockdev comparev and writev ...[2024-12-05 09:37:37.997740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2238000 len:0x1000 00:05:50.455 [2024-12-05 09:37:37.997780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:50.455 passed 00:05:50.455 Test: blockdev nvme passthru rw ...passed 00:05:50.455 Test: blockdev nvme passthru vendor specific ...passed 00:05:50.455 Test: blockdev nvme admin passthru ...[2024-12-05 09:37:37.999231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:50.455 [2024-12-05 09:37:37.999263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:50.456 passed 00:05:50.456 Test: blockdev copy ...passed 00:05:50.456 Suite: bdevio tests on: Nvme1n1 00:05:50.456 Test: blockdev write read block ...passed 00:05:50.456 Test: blockdev write zeroes read block ...passed 00:05:50.456 Test: blockdev write zeroes read no split ...passed 00:05:50.456 Test: blockdev write zeroes read split ...passed 00:05:50.456 Test: blockdev write zeroes read split partial ...passed 00:05:50.456 Test: blockdev reset ...[2024-12-05 09:37:38.057269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:50.456 [2024-12-05 09:37:38.060467] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:05:50.456 passed 00:05:50.456 Test: blockdev write read 8 blocks ... 
00:05:50.456 passed 00:05:50.456 Test: blockdev write read size > 128k ...passed 00:05:50.456 Test: blockdev write read invalid size ...passed 00:05:50.456 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:50.456 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:50.456 Test: blockdev write read max offset ...passed 00:05:50.456 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:50.456 Test: blockdev writev readv 8 blocks ...passed 00:05:50.456 Test: blockdev writev readv 30 x 1block ...passed 00:05:50.456 Test: blockdev writev readv block ...passed 00:05:50.456 Test: blockdev writev readv size > 128k ...passed 00:05:50.456 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:50.456 Test: blockdev comparev and writev ...[2024-12-05 09:37:38.077717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2234000 len:0x1000 00:05:50.456 [2024-12-05 09:37:38.077772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:50.456 passed 00:05:50.456 Test: blockdev nvme passthru rw ...passed 00:05:50.456 Test: blockdev nvme passthru vendor specific ...passed 00:05:50.456 Test: blockdev nvme admin passthru ...[2024-12-05 09:37:38.080252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:50.456 [2024-12-05 09:37:38.080291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:50.716 passed 00:05:50.716 Test: blockdev copy ...passed 00:05:50.717 Suite: bdevio tests on: Nvme0n1 00:05:50.717 Test: blockdev write read block ...passed 00:05:50.717 Test: blockdev write zeroes read block ...passed 00:05:50.717 Test: blockdev write zeroes read no split ...passed 00:05:50.717 Test: blockdev write zeroes read split ...passed 00:05:50.717 Test: blockdev write zeroes read split partial ...passed 00:05:50.717 Test: blockdev reset ...[2024-12-05 09:37:38.138938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:50.717 [2024-12-05 09:37:38.143479] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:05:50.717 passed 00:05:50.717 Test: blockdev write read 8 blocks ...passed 00:05:50.717 Test: blockdev write read size > 128k ...passed 00:05:50.717 Test: blockdev write read invalid size ...passed 00:05:50.717 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:50.717 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:50.717 Test: blockdev write read max offset ...passed 00:05:50.717 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:50.717 Test: blockdev writev readv 8 blocks ...passed 00:05:50.717 Test: blockdev writev readv 30 x 1block ...passed 00:05:50.717 Test: blockdev writev readv block ...passed 00:05:50.717 Test: blockdev writev readv size > 128k ...passed 00:05:50.717 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:50.717 Test: blockdev comparev and writev ...passed 00:05:50.717 Test: blockdev nvme passthru rw ...[2024-12-05 09:37:38.158550] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has separate metadata which is not supported yet. 
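The ERROR above is an expected skip rather than a failure: of the six targets, only Nvme0n1 was created with separate metadata (the bdev_get_bdevs dump earlier shows md_size 64 with md_interleave false for it alone), and bdevio's comparev_and_writev case does not handle that layout yet, so the case is reported passed and skipped. To confirm which bdevs carry separate metadata on a running target, a sketch against the default RPC socket:

  # list bdevs whose namespaces expose non-interleaved (separate) metadata
  ./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'

For this configuration that should print only Nvme0n1.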
00:05:50.717 passed 00:05:50.717 Test: blockdev nvme passthru vendor specific ...passed 00:05:50.717 Test: blockdev nvme admin passthru ...[2024-12-05 09:37:38.160002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:50.717 [2024-12-05 09:37:38.160131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:50.717 passed 00:05:50.717 Test: blockdev copy ...passed 00:05:50.717 00:05:50.717 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.717 suites 6 6 n/a 0 0 00:05:50.717 tests 138 138 138 0 0 00:05:50.717 asserts 893 893 893 0 n/a 00:05:50.717 00:05:50.717 Elapsed time = 1.220 seconds 00:05:50.717 0 00:05:50.717 09:37:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59920 00:05:50.717 09:37:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59920 ']' 00:05:50.717 09:37:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59920 00:05:50.717 09:37:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:05:50.717 09:37:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.717 09:37:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59920 00:05:50.717 09:37:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.717 09:37:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.717 09:37:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59920' 00:05:50.717 killing process with pid 59920 00:05:50.717 09:37:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59920 00:05:50.717 09:37:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59920 00:05:51.286 09:37:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:51.286 00:05:51.286 real 0m2.124s 00:05:51.286 user 0m5.428s 00:05:51.286 sys 0m0.265s 00:05:51.286 09:37:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.286 09:37:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:51.286 ************************************ 00:05:51.286 END TEST bdev_bounds 00:05:51.286 ************************************ 00:05:51.286 09:37:38 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:51.286 09:37:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:51.286 09:37:38 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.286 09:37:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:51.286 ************************************ 00:05:51.286 START TEST bdev_nbd 00:05:51.286 ************************************ 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:51.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59974 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59974 /var/tmp/spdk-nbd.sock 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59974 ']' 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:51.286 09:37:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:51.545 [2024-12-05 09:37:38.940182] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:51.545 [2024-12-05 09:37:38.940303] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:51.545 [2024-12-05 09:37:39.102344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.802 [2024-12-05 09:37:39.201918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.367 09:37:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.367 09:37:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:05:52.367 09:37:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:52.367 09:37:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.367 09:37:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:52.367 09:37:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:52.367 09:37:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:52.367 09:37:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.367 09:37:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:52.367 09:37:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:52.367 09:37:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:52.367 09:37:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:52.367 09:37:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:52.367 09:37:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:52.367 09:37:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:52.626 09:37:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:52.626 09:37:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:52.626 1+0 records in 
00:05:52.626 1+0 records out 00:05:52.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332631 s, 12.3 MB/s 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.626 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:52.885 1+0 records in 00:05:52.885 1+0 records out 00:05:52.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468889 s, 8.7 MB/s 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:52.885 1+0 records in 00:05:52.885 1+0 records out 00:05:52.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459741 s, 8.9 MB/s 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:52.885 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:53.143 1+0 records in 00:05:53.143 1+0 records out 00:05:53.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474885 s, 8.6 MB/s 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:53.143 09:37:40 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:53.143 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:53.401 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:53.401 09:37:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:53.401 1+0 records in 00:05:53.401 1+0 records out 00:05:53.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333847 s, 12.3 MB/s 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:53.401 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:53.662 1+0 records in 00:05:53.662 1+0 records out 00:05:53.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475118 s, 8.6 MB/s 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:53.662 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.921 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:53.921 { 00:05:53.921 "nbd_device": "/dev/nbd0", 00:05:53.921 "bdev_name": "Nvme0n1" 00:05:53.921 }, 00:05:53.921 { 00:05:53.921 "nbd_device": "/dev/nbd1", 00:05:53.921 "bdev_name": "Nvme1n1" 00:05:53.921 }, 00:05:53.921 { 00:05:53.921 "nbd_device": "/dev/nbd2", 00:05:53.921 "bdev_name": "Nvme2n1" 00:05:53.921 }, 00:05:53.921 { 00:05:53.921 "nbd_device": "/dev/nbd3", 00:05:53.921 "bdev_name": "Nvme2n2" 00:05:53.921 }, 00:05:53.921 { 00:05:53.921 "nbd_device": "/dev/nbd4", 00:05:53.921 "bdev_name": "Nvme2n3" 00:05:53.921 }, 00:05:53.921 { 00:05:53.921 "nbd_device": "/dev/nbd5", 00:05:53.921 "bdev_name": "Nvme3n1" 00:05:53.921 } 00:05:53.921 ]' 00:05:53.921 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:53.921 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:53.921 { 00:05:53.921 "nbd_device": "/dev/nbd0", 00:05:53.921 "bdev_name": "Nvme0n1" 00:05:53.921 }, 00:05:53.921 { 00:05:53.921 "nbd_device": "/dev/nbd1", 00:05:53.921 "bdev_name": "Nvme1n1" 00:05:53.921 }, 00:05:53.921 { 00:05:53.921 "nbd_device": "/dev/nbd2", 00:05:53.921 "bdev_name": "Nvme2n1" 00:05:53.921 }, 00:05:53.921 { 00:05:53.921 "nbd_device": "/dev/nbd3", 00:05:53.921 "bdev_name": "Nvme2n2" 00:05:53.921 }, 00:05:53.921 { 00:05:53.921 "nbd_device": "/dev/nbd4", 00:05:53.921 "bdev_name": "Nvme2n3" 00:05:53.921 }, 00:05:53.921 { 00:05:53.921 "nbd_device": "/dev/nbd5", 00:05:53.921 "bdev_name": "Nvme3n1" 00:05:53.921 } 00:05:53.921 ]' 00:05:53.921 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:53.921 09:37:41 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:53.921 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.921 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:53.921 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.921 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:53.921 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.922 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.180 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.180 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.180 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.180 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.180 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.180 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.180 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:54.180 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.180 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.180 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.438 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.438 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.438 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.438 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.438 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.438 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.438 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:54.438 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.438 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.438 09:37:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:54.696 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:54.696 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:54.696 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:54.696 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.696 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.696 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:54.696 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:54.696 09:37:42 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:54.696 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.696 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.955 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:55.214 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:55.214 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:55.214 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:55.214 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.214 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.214 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:55.214 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:55.214 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.214 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.214 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.214 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.474 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.474 09:37:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.474 09:37:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:55.474 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:55.734 /dev/nbd0 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.734 
09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:55.734 1+0 records in 00:05:55.734 1+0 records out 00:05:55.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476701 s, 8.6 MB/s 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:55.734 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:55.993 /dev/nbd1 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:55.993 1+0 records in 00:05:55.993 1+0 records out 00:05:55.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428243 s, 9.6 MB/s 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:55.993 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:56.252 /dev/nbd10 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:56.252 1+0 records in 00:05:56.252 1+0 records out 00:05:56.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518332 s, 7.9 MB/s 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:56.252 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:05:56.510 /dev/nbd11 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.510 09:37:43 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:56.510 1+0 records in 00:05:56.510 1+0 records out 00:05:56.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478838 s, 8.6 MB/s 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:56.510 09:37:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:56.769 /dev/nbd12 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:56.769 1+0 records in 00:05:56.769 1+0 records out 00:05:56.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032834 s, 12.5 MB/s 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:56.769 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:05:56.769 /dev/nbd13 
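The waitfornbd checks traced above are the readiness gate that follows every nbd_start_disk call: the helper polls /proc/partitions until the kernel registers the device, then proves the SPDK backend answers I/O with a single 4 KiB direct read whose result must be non-empty. A minimal standalone sketch of that pattern (the function name, retry sleep, and scratch path are illustrative, not the real autotest helpers):

    #!/usr/bin/env bash
    # Sketch of the waitfornbd pattern, assuming a simplified retry loop.
    wait_for_nbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # The device shows up in /proc/partitions once the kernel attaches it.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One O_DIRECT 4 KiB read; a zero-length result means the export is not serving I/O.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]
    }

    wait_for_nbd nbd13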
00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.028 1+0 records in 00:05:57.028 1+0 records out 00:05:57.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044161 s, 9.3 MB/s 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.028 { 00:05:57.028 "nbd_device": "/dev/nbd0", 00:05:57.028 "bdev_name": "Nvme0n1" 00:05:57.028 }, 00:05:57.028 { 00:05:57.028 "nbd_device": "/dev/nbd1", 00:05:57.028 "bdev_name": "Nvme1n1" 00:05:57.028 }, 00:05:57.028 { 00:05:57.028 "nbd_device": "/dev/nbd10", 00:05:57.028 "bdev_name": "Nvme2n1" 00:05:57.028 }, 00:05:57.028 { 00:05:57.028 "nbd_device": "/dev/nbd11", 00:05:57.028 "bdev_name": "Nvme2n2" 00:05:57.028 }, 00:05:57.028 { 00:05:57.028 "nbd_device": "/dev/nbd12", 00:05:57.028 "bdev_name": "Nvme2n3" 00:05:57.028 }, 00:05:57.028 { 00:05:57.028 "nbd_device": "/dev/nbd13", 00:05:57.028 "bdev_name": "Nvme3n1" 00:05:57.028 } 00:05:57.028 ]' 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.028 { 00:05:57.028 "nbd_device": "/dev/nbd0", 00:05:57.028 "bdev_name": "Nvme0n1" 00:05:57.028 }, 00:05:57.028 { 00:05:57.028 "nbd_device": "/dev/nbd1", 00:05:57.028 "bdev_name": "Nvme1n1" 00:05:57.028 }, 00:05:57.028 { 00:05:57.028 "nbd_device": "/dev/nbd10", 00:05:57.028 "bdev_name": "Nvme2n1" 
00:05:57.028 }, 00:05:57.028 { 00:05:57.028 "nbd_device": "/dev/nbd11", 00:05:57.028 "bdev_name": "Nvme2n2" 00:05:57.028 }, 00:05:57.028 { 00:05:57.028 "nbd_device": "/dev/nbd12", 00:05:57.028 "bdev_name": "Nvme2n3" 00:05:57.028 }, 00:05:57.028 { 00:05:57.028 "nbd_device": "/dev/nbd13", 00:05:57.028 "bdev_name": "Nvme3n1" 00:05:57.028 } 00:05:57.028 ]' 00:05:57.028 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.287 /dev/nbd1 00:05:57.287 /dev/nbd10 00:05:57.287 /dev/nbd11 00:05:57.287 /dev/nbd12 00:05:57.287 /dev/nbd13' 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.287 /dev/nbd1 00:05:57.287 /dev/nbd10 00:05:57.287 /dev/nbd11 00:05:57.287 /dev/nbd12 00:05:57.287 /dev/nbd13' 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:05:57.287 256+0 records in 00:05:57.287 256+0 records out 00:05:57.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00712709 s, 147 MB/s 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.287 256+0 records in 00:05:57.287 256+0 records out 00:05:57.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0530966 s, 19.7 MB/s 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.287 256+0 records in 00:05:57.287 256+0 records out 00:05:57.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0716757 s, 14.6 MB/s 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:05:57.287 256+0 records in 00:05:57.287 256+0 records out 
00:05:57.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0712283 s, 14.7 MB/s 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.287 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:05:57.546 256+0 records in 00:05:57.546 256+0 records out 00:05:57.546 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0631301 s, 16.6 MB/s 00:05:57.546 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.546 09:37:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:05:57.546 256+0 records in 00:05:57.546 256+0 records out 00:05:57.546 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0625163 s, 16.8 MB/s 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:05:57.546 256+0 records in 00:05:57.546 256+0 records out 00:05:57.546 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0629142 s, 16.7 MB/s 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.546 09:37:45 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.546 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.804 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.804 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.804 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.804 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.804 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.804 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.804 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:57.804 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.804 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.804 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.061 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.061 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.061 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.061 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.061 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.061 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.061 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:58.061 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.061 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.061 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:05:58.318 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:05:58.318 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:05:58.318 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:05:58.318 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.318 
09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.318 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:05:58.318 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:58.318 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.318 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.318 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:05:58.575 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:05:58.575 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:05:58.575 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:05:58.575 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.575 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.575 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:05:58.575 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:58.575 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.575 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.575 09:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:05:58.575 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:05:58.575 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:05:58.575 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:05:58.575 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.575 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.575 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:05:58.575 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:58.575 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.575 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.575 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:05:58.833 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:05:58.833 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:05:58.833 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:05:58.833 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.833 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.833 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:05:58.833 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:58.833 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.833 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.833 09:37:46 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.833 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:05:59.091 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:05:59.349 malloc_lvol_verify 00:05:59.349 09:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:05:59.608 991e8d7d-0fd5-4463-9538-54ceea8805f7 00:05:59.608 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:05:59.866 0ef3f38c-e43b-414a-b939-ba10d5a3e370 00:05:59.866 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:05:59.866 /dev/nbd0 00:05:59.866 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:05:59.866 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:05:59.866 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:05:59.866 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:05:59.866 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:05:59.866 mke2fs 1.47.0 (5-Feb-2023) 00:05:59.866 Discarding device blocks: 0/4096 done 00:05:59.866 Creating filesystem with 4096 1k blocks and 1024 inodes 00:05:59.866 00:05:59.866 Allocating group tables: 0/1 done 00:05:59.866 Writing inode tables: 0/1 done 00:06:00.124 Creating journal (1024 blocks): done 00:06:00.124 Writing superblocks and filesystem accounting information: 0/1 done 00:06:00.124 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
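nbd_with_lvol_verify, traced above, exercises the full stack in one pass: a malloc bdev, an lvstore on top of it, a 4 MiB logical volume, an NBD export of that volume, a sysfs capacity check, and mkfs.ext4 as the functional smoke test. Condensed into a plain script using the same RPC calls as the trace (the UUIDs returned by the lvstore and lvol creation differ on every run):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol in lvstore "lvs"
    $RPC nbd_start_disk lvs/lvol /dev/nbd0
    # /sys/block/nbd0/size stays 0 if the kernel never learned the capacity.
    (( $(cat /sys/block/nbd0/size) > 0 )) && mkfs.ext4 /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd0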
00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59974 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59974 ']' 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59974 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59974 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.124 killing process with pid 59974 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59974' 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59974 00:06:00.124 09:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59974 00:06:01.060 09:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:01.060 00:06:01.060 real 0m9.485s 00:06:01.060 user 0m13.709s 00:06:01.060 sys 0m3.046s 00:06:01.060 09:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.060 ************************************ 00:06:01.060 END TEST bdev_nbd 00:06:01.060 ************************************ 00:06:01.060 09:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:01.060 09:37:48 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:01.060 09:37:48 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:01.060 skipping fio tests on NVMe due to multi-ns failures. 00:06:01.060 09:37:48 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
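With the NBD suite torn down and fio explicitly skipped, the remaining stages all drive the same bdevperf binary with different workloads. The verify run that starts next uses exactly the flags below, copied from the trace (the trailing '' appears verbatim in the invocation):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''

Queue depth 128, 4 KiB I/O, five seconds, core mask 0x3: two reactors run, which is why every bdev appears twice in the latency table that follows, once per core mask.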
00:06:01.060 09:37:48 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:01.060 09:37:48 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:01.060 09:37:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:01.060 09:37:48 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.060 09:37:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:01.060 ************************************ 00:06:01.060 START TEST bdev_verify 00:06:01.060 ************************************ 00:06:01.060 09:37:48 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:01.060 [2024-12-05 09:37:48.444332] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:01.060 [2024-12-05 09:37:48.444422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60347 ] 00:06:01.060 [2024-12-05 09:37:48.594712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.060 [2024-12-05 09:37:48.677417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.060 [2024-12-05 09:37:48.677494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.631 Running I/O for 5 seconds... 00:06:03.942 23872.00 IOPS, 93.25 MiB/s [2024-12-05T09:37:52.508Z] 24704.00 IOPS, 96.50 MiB/s [2024-12-05T09:37:53.451Z] 24960.00 IOPS, 97.50 MiB/s [2024-12-05T09:37:54.393Z] 25008.00 IOPS, 97.69 MiB/s [2024-12-05T09:37:54.393Z] 25241.60 IOPS, 98.60 MiB/s 00:06:06.764 Latency(us) 00:06:06.764 [2024-12-05T09:37:54.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:06.764 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:06.764 Verification LBA range: start 0x0 length 0xbd0bd 00:06:06.764 Nvme0n1 : 5.04 2159.50 8.44 0.00 0.00 59070.48 10637.00 79449.80 00:06:06.764 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:06.764 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:06.764 Nvme0n1 : 5.03 1983.27 7.75 0.00 0.00 64296.43 10939.47 66140.95 00:06:06.764 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:06.764 Verification LBA range: start 0x0 length 0xa0000 00:06:06.764 Nvme1n1 : 5.04 2159.02 8.43 0.00 0.00 58919.00 11040.30 64931.05 00:06:06.764 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:06.764 Verification LBA range: start 0xa0000 length 0xa0000 00:06:06.764 Nvme1n1 : 5.06 1985.34 7.76 0.00 0.00 64089.91 7410.61 60898.07 00:06:06.764 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:06.764 Verification LBA range: start 0x0 length 0x80000 00:06:06.764 Nvme2n1 : 5.06 2162.77 8.45 0.00 0.00 58663.73 4486.70 59284.87 00:06:06.764 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:06.764 Verification LBA range: start 0x80000 length 0x80000 00:06:06.764 Nvme2n1 : 5.07 1993.24 7.79 0.00 0.00 63830.49 9074.22 60494.77 00:06:06.764 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:06.764 Verification LBA range: start 0x0 length 0x80000 00:06:06.764 Nvme2n2 : 5.07 2170.84 8.48 0.00 0.00 58393.78 8570.09 57268.38 00:06:06.764 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:06.764 Verification LBA range: start 0x80000 length 0x80000 00:06:06.764 Nvme2n2 : 5.08 1992.31 7.78 0.00 0.00 63731.07 10384.94 58074.98 00:06:06.764 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:06.764 Verification LBA range: start 0x0 length 0x80000 00:06:06.764 Nvme2n3 : 5.07 2170.22 8.48 0.00 0.00 58258.87 9124.63 64931.05 00:06:06.764 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:06.764 Verification LBA range: start 0x80000 length 0x80000 00:06:06.764 Nvme2n3 : 5.08 1991.75 7.78 0.00 0.00 63623.97 10838.65 60091.47 00:06:06.764 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:06.764 Verification LBA range: start 0x0 length 0x20000 00:06:06.764 Nvme3n1 : 5.07 2169.61 8.48 0.00 0.00 58191.07 9578.34 67350.84 00:06:06.764 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:06.764 Verification LBA range: start 0x20000 length 0x20000 00:06:06.764 Nvme3n1 : 5.08 1991.23 7.78 0.00 0.00 63525.17 8418.86 62914.56 00:06:06.764 [2024-12-05T09:37:54.393Z] =================================================================================================================== 00:06:06.764 [2024-12-05T09:37:54.393Z] Total : 24929.09 97.38 0.00 0.00 61105.21 4486.70 79449.80 00:06:08.176 00:06:08.176 real 0m7.000s 00:06:08.176 user 0m13.177s 00:06:08.176 sys 0m0.221s 00:06:08.176 09:37:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.176 ************************************ 00:06:08.176 END TEST bdev_verify 00:06:08.176 ************************************ 00:06:08.176 09:37:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:08.176 09:37:55 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:08.176 09:37:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:08.176 09:37:55 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.176 09:37:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.176 ************************************ 00:06:08.176 START TEST bdev_verify_big_io 00:06:08.176 ************************************ 00:06:08.176 09:37:55 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:08.176 [2024-12-05 09:37:55.519861] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
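The big-I/O variant only changes -o 4096 to -o 65536, so each operation moves 64 KiB. In the table that follows, MiB/s is simply IOPS times I/O size: for the total row, 1,563.31 IOPS × 65,536 B ≈ 102,453,084 B/s, and 102,453,084 / 1,048,576 ≈ 97.71 MiB/s, matching the reported figure.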
00:06:08.176 [2024-12-05 09:37:55.520008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60440 ] 00:06:08.176 [2024-12-05 09:37:55.679693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.436 [2024-12-05 09:37:55.807885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.436 [2024-12-05 09:37:55.807951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.004 Running I/O for 5 seconds... 00:06:14.140 1117.00 IOPS, 69.81 MiB/s [2024-12-05T09:38:02.705Z] 2230.00 IOPS, 139.38 MiB/s [2024-12-05T09:38:02.705Z] 2697.33 IOPS, 168.58 MiB/s 00:06:15.076 Latency(us) 00:06:15.076 [2024-12-05T09:38:02.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:15.076 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:15.076 Verification LBA range: start 0x0 length 0xbd0b 00:06:15.076 Nvme0n1 : 5.77 121.93 7.62 0.00 0.00 1020836.56 13308.85 1084066.26 00:06:15.076 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:15.076 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:15.076 Nvme0n1 : 5.83 131.77 8.24 0.00 0.00 952824.71 18047.61 935652.43 00:06:15.076 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:15.076 Verification LBA range: start 0x0 length 0xa000 00:06:15.076 Nvme1n1 : 5.78 120.50 7.53 0.00 0.00 1003478.57 49202.41 916294.10 00:06:15.076 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:15.076 Verification LBA range: start 0xa000 length 0xa000 00:06:15.076 Nvme1n1 : 5.83 128.31 8.02 0.00 0.00 946267.93 25105.33 1000180.18 00:06:15.076 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:15.076 Verification LBA range: start 0x0 length 0x8000 00:06:15.076 Nvme2n1 : 5.78 118.88 7.43 0.00 0.00 986119.77 51420.55 1167952.34 00:06:15.076 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:15.076 Verification LBA range: start 0x8000 length 0x8000 00:06:15.076 Nvme2n1 : 5.83 127.75 7.98 0.00 0.00 922409.49 27021.00 1000180.18 00:06:15.076 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:15.076 Verification LBA range: start 0x0 length 0x8000 00:06:15.076 Nvme2n2 : 5.83 118.66 7.42 0.00 0.00 957359.96 25609.45 1742249.35 00:06:15.076 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:15.076 Verification LBA range: start 0x8000 length 0x8000 00:06:15.076 Nvme2n2 : 5.84 128.51 8.03 0.00 0.00 890965.08 27424.30 1000180.18 00:06:15.076 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:15.076 Verification LBA range: start 0x0 length 0x8000 00:06:15.076 Nvme2n3 : 5.84 122.88 7.68 0.00 0.00 894267.37 26416.05 1768060.46 00:06:15.076 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:15.076 Verification LBA range: start 0x8000 length 0x8000 00:06:15.076 Nvme2n3 : 5.84 131.55 8.22 0.00 0.00 847994.62 51017.26 1000180.18 00:06:15.076 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:15.076 Verification LBA range: start 0x0 length 0x2000 00:06:15.076 Nvme3n1 : 5.95 164.44 10.28 0.00 0.00 648059.47 288.30 1819682.66 00:06:15.076 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:06:15.076 Verification LBA range: start 0x2000 length 0x2000 00:06:15.076 Nvme3n1 : 5.89 148.13 9.26 0.00 0.00 735527.88 781.39 1000180.18 00:06:15.076 [2024-12-05T09:38:02.705Z] =================================================================================================================== 00:06:15.076 [2024-12-05T09:38:02.705Z] Total : 1563.31 97.71 0.00 0.00 889810.61 288.30 1819682.66 00:06:16.449 00:06:16.449 real 0m8.542s 00:06:16.449 user 0m16.068s 00:06:16.449 sys 0m0.281s 00:06:16.449 09:38:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.449 ************************************ 00:06:16.449 END TEST bdev_verify_big_io 00:06:16.449 ************************************ 00:06:16.449 09:38:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:16.449 09:38:04 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:16.449 09:38:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:16.449 09:38:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.449 09:38:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.449 ************************************ 00:06:16.449 START TEST bdev_write_zeroes 00:06:16.449 ************************************ 00:06:16.449 09:38:04 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:16.707 [2024-12-05 09:38:04.101631] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:16.707 [2024-12-05 09:38:04.101745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60551 ] 00:06:16.707 [2024-12-05 09:38:04.254434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.965 [2024-12-05 09:38:04.354126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.532 Running I/O for 1 seconds... 
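The write_zeroes pass that follows drops to a single core (no -C or -m flag, so one reactor, as the "Total cores available: 1" notice confirms) and runs for one second at the same queue depth and 4 KiB size. The same arithmetic applies to its table: 79,488 IOPS × 4,096 B ≈ 325,582,848 B/s, i.e. 310.50 MiB/s, exactly the headline figure reported.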
00:06:18.463 79488.00 IOPS, 310.50 MiB/s 00:06:18.463 Latency(us) 00:06:18.463 [2024-12-05T09:38:06.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:18.463 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:18.463 Nvme0n1 : 1.02 13165.06 51.43 0.00 0.00 9703.14 8368.44 19358.33 00:06:18.463 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:18.463 Nvme1n1 : 1.02 13150.11 51.37 0.00 0.00 9701.85 8368.44 19156.68 00:06:18.464 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:18.464 Nvme2n1 : 1.02 13135.26 51.31 0.00 0.00 9692.99 8368.44 18551.73 00:06:18.464 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:18.464 Nvme2n2 : 1.02 13120.45 51.25 0.00 0.00 9676.68 7360.20 18148.43 00:06:18.464 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:18.464 Nvme2n3 : 1.03 13105.61 51.19 0.00 0.00 9674.12 7158.55 17946.78 00:06:18.464 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:18.464 Nvme3n1 : 1.03 13090.66 51.14 0.00 0.00 9667.60 6553.60 19459.15 00:06:18.464 [2024-12-05T09:38:06.093Z] =================================================================================================================== 00:06:18.464 [2024-12-05T09:38:06.093Z] Total : 78767.15 307.68 0.00 0.00 9686.07 6553.60 19459.15 00:06:19.397 00:06:19.397 real 0m2.678s 00:06:19.397 user 0m2.375s 00:06:19.397 sys 0m0.191s 00:06:19.397 09:38:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.397 09:38:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:19.397 ************************************ 00:06:19.397 END TEST bdev_write_zeroes 00:06:19.397 ************************************ 00:06:19.397 09:38:06 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:19.397 09:38:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:19.397 09:38:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.397 09:38:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:19.397 ************************************ 00:06:19.397 START TEST bdev_json_nonenclosed 00:06:19.397 ************************************ 00:06:19.397 09:38:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:19.397 [2024-12-05 09:38:06.819644] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:19.397 [2024-12-05 09:38:06.819760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60604 ] 00:06:19.397 [2024-12-05 09:38:06.981787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.655 [2024-12-05 09:38:07.079989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.655 [2024-12-05 09:38:07.080064] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:19.655 [2024-12-05 09:38:07.080080] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:19.655 [2024-12-05 09:38:07.080090] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.655 00:06:19.655 real 0m0.499s 00:06:19.655 user 0m0.308s 00:06:19.655 sys 0m0.087s 00:06:19.655 09:38:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.655 ************************************ 00:06:19.655 END TEST bdev_json_nonenclosed 00:06:19.655 09:38:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:19.655 ************************************ 00:06:19.912 09:38:07 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:19.912 09:38:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:19.912 09:38:07 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.912 09:38:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:19.912 ************************************ 00:06:19.912 START TEST bdev_json_nonarray 00:06:19.912 ************************************ 00:06:19.912 09:38:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:19.912 [2024-12-05 09:38:07.362698] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:19.912 [2024-12-05 09:38:07.362808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60624 ] 00:06:19.912 [2024-12-05 09:38:07.522422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.170 [2024-12-05 09:38:07.619999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.170 [2024-12-05 09:38:07.620083] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:20.170 [2024-12-05 09:38:07.620101] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:20.170 [2024-12-05 09:38:07.620110] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.170 00:06:20.170 real 0m0.497s 00:06:20.170 user 0m0.304s 00:06:20.170 sys 0m0.089s 00:06:20.170 09:38:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.170 09:38:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:20.170 ************************************ 00:06:20.170 END TEST bdev_json_nonarray 00:06:20.170 ************************************ 00:06:20.429 09:38:07 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:20.429 09:38:07 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:20.429 09:38:07 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:20.429 09:38:07 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:20.429 09:38:07 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:20.429 09:38:07 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:20.429 09:38:07 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:20.429 09:38:07 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:20.429 09:38:07 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:20.429 09:38:07 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:20.429 09:38:07 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:20.429 ************************************ 00:06:20.429 END TEST blockdev_nvme 00:06:20.429 ************************************ 00:06:20.429 00:06:20.429 real 0m35.263s 00:06:20.429 user 0m55.300s 00:06:20.429 sys 0m5.025s 00:06:20.429 09:38:07 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.429 09:38:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:20.429 09:38:07 -- spdk/autotest.sh@209 -- # uname -s 00:06:20.429 09:38:07 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:20.429 09:38:07 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:20.429 09:38:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:20.429 09:38:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.429 09:38:07 -- common/autotest_common.sh@10 -- # set +x 00:06:20.429 ************************************ 00:06:20.429 START TEST blockdev_nvme_gpt 00:06:20.429 ************************************ 00:06:20.429 09:38:07 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:20.429 * Looking for test storage... 
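(The two JSON tests above are negative tests: bdevperf is pointed at deliberately malformed configs and must exit cleanly with the errors logged above, "not enclosed in {}" and "'subsystems' should be an array", rather than crash. The actual contents of nonenclosed.json and nonarray.json are not shown in this log; the bodies below are illustrative assumptions that would trip the same two json_config_prepare_ctx checks.)

cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": {} }
EOF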
00:06:20.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:20.429 09:38:07 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.429 09:38:07 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.429 09:38:07 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.429 09:38:07 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.429 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.429 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.429 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.429 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.429 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.429 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.429 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.429 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.429 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.429 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.429 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.429 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:20.429 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:20.429 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.429 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.430 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:20.430 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:20.430 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.430 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:20.430 09:38:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.430 09:38:08 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:20.430 09:38:08 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:20.430 09:38:08 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.430 09:38:08 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:20.430 09:38:08 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.430 09:38:08 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.430 09:38:08 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.430 09:38:08 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:20.430 09:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.430 09:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.430 --rc genhtml_branch_coverage=1 00:06:20.430 --rc genhtml_function_coverage=1 00:06:20.430 --rc genhtml_legend=1 00:06:20.430 --rc geninfo_all_blocks=1 00:06:20.430 --rc geninfo_unexecuted_blocks=1 00:06:20.430 00:06:20.430 ' 00:06:20.430 09:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.430 --rc 
genhtml_branch_coverage=1 00:06:20.430 --rc genhtml_function_coverage=1 00:06:20.430 --rc genhtml_legend=1 00:06:20.430 --rc geninfo_all_blocks=1 00:06:20.430 --rc geninfo_unexecuted_blocks=1 00:06:20.430 00:06:20.430 ' 00:06:20.430 09:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.430 --rc genhtml_branch_coverage=1 00:06:20.430 --rc genhtml_function_coverage=1 00:06:20.430 --rc genhtml_legend=1 00:06:20.430 --rc geninfo_all_blocks=1 00:06:20.430 --rc geninfo_unexecuted_blocks=1 00:06:20.430 00:06:20.430 ' 00:06:20.430 09:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.430 --rc genhtml_branch_coverage=1 00:06:20.430 --rc genhtml_function_coverage=1 00:06:20.430 --rc genhtml_legend=1 00:06:20.430 --rc geninfo_all_blocks=1 00:06:20.430 --rc geninfo_unexecuted_blocks=1 00:06:20.430 00:06:20.430 ' 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60708 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60708 
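(What setup_gpt_conf below does, condensed: it scans the NVMe block devices, picks the first one whose parted print reports an unrecognised disk label, /dev/nvme0n1 here, splits it into two halves, and re-types the partitions with SPDK's GPT type GUIDs read out of module/bdev/gpt/gpt.h so the gpt vbdev module will claim them. The commands and GUIDs below are copied from the trace that follows; /dev/nvme0n1 is evidently the kernel's name for the controller SPDK attaches as Nvme1 at 0000:00:11.0, which is why the halves later surface as Nvme1n1p1 and Nvme1n1p2 in the bdev dump.)

parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1   # SPDK_GPT_PART_TYPE_GUID
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1   # SPDK_GPT_PART_TYPE_GUID_OLD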
00:06:20.430 09:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60708 ']' 00:06:20.430 09:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.430 09:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.430 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:20.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.430 09:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.430 09:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.430 09:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:20.688 [2024-12-05 09:38:08.086163] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:20.688 [2024-12-05 09:38:08.086289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60708 ] 00:06:20.688 [2024-12-05 09:38:08.245635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.946 [2024-12-05 09:38:08.342985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.512 09:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.512 09:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:21.512 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:21.512 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:21.512 09:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:21.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:21.770 Waiting for block devices as requested 00:06:21.770 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:22.028 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:22.028 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:22.028 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:27.291 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:27.291 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:27.291 09:38:14 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:27.291 09:38:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:27.291 09:38:14 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:27.291 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:27.291 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:27.291 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:27.291 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:27.291 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:27.291 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:27.292 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:27.292 BYT; 00:06:27.292 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:27.292 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:27.292 BYT; 00:06:27.292 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:27.292 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:27.292 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:27.292 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:27.292 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:27.292 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:27.292 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:27.292 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:27.292 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:27.292 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:27.292 09:38:14 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:27.292 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:27.292 09:38:14 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:28.223 The operation has completed successfully. 00:06:28.223 09:38:15 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:29.157 The operation has completed successfully. 00:06:29.157 09:38:16 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:29.724 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:29.982 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:29.982 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:29.982 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:30.240 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:30.240 09:38:17 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:30.240 09:38:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.240 09:38:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:30.240 [] 00:06:30.240 09:38:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.240 09:38:17 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:30.240 09:38:17 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:30.240 09:38:17 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:30.240 09:38:17 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:30.240 09:38:17 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:30.240 09:38:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.240 09:38:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:30.499 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.499 09:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:30.499 09:38:18 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.499 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:30.499 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.499 09:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:30.499 09:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:30.499 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.499 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:30.499 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.499 09:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:30.499 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.499 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:30.499 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.499 09:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:30.499 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.499 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:30.499 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.499 09:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:30.499 09:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:30.499 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.499 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:30.499 09:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:30.499 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.499 09:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:30.499 09:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:30.758 09:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "bcab15da-389d-49a8-874d-f4d469a83ce1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "bcab15da-389d-49a8-874d-f4d469a83ce1",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "0d3309ba-e72b-46b8-b6fa-f3c30b4e2857"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0d3309ba-e72b-46b8-b6fa-f3c30b4e2857",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c80d13d8-12e2-4ca9-86ce-647b4c12b81b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c80d13d8-12e2-4ca9-86ce-647b4c12b81b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9be79180-11c0-47b4-8733-2de12da34735"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9be79180-11c0-47b4-8733-2de12da34735",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e79fb818-8755-40b8-981e-3dabcdf8512c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e79fb818-8755-40b8-981e-3dabcdf8512c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:30.758 09:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:30.758 09:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:30.758 09:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:30.758 09:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60708 00:06:30.758 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60708 ']' 00:06:30.758 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60708 00:06:30.758 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:30.758 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.758 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60708 00:06:30.758 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.758 killing process with pid 60708 00:06:30.758 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.758 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60708' 00:06:30.758 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60708 00:06:30.758 09:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60708 00:06:32.130 09:38:19 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:32.130 09:38:19 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:32.130 09:38:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:32.130 09:38:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.130 09:38:19 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.130 ************************************ 00:06:32.130 START TEST bdev_hello_world 00:06:32.130 ************************************ 00:06:32.130 09:38:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:32.130 [2024-12-05 09:38:19.677321] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:32.130 [2024-12-05 09:38:19.677441] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61333 ] 00:06:32.388 [2024-12-05 09:38:19.838771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.388 [2024-12-05 09:38:19.940564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.953 [2024-12-05 09:38:20.483956] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:32.953 [2024-12-05 09:38:20.484010] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:32.953 [2024-12-05 09:38:20.484034] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:32.953 [2024-12-05 09:38:20.486473] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:32.953 [2024-12-05 09:38:20.486871] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:32.953 [2024-12-05 09:38:20.486900] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:32.953 [2024-12-05 09:38:20.486986] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:32.953 00:06:32.953 [2024-12-05 09:38:20.487004] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:33.885 00:06:33.885 real 0m1.583s 00:06:33.885 user 0m1.306s 00:06:33.885 sys 0m0.169s 00:06:33.885 09:38:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.885 ************************************ 00:06:33.885 END TEST bdev_hello_world 00:06:33.885 ************************************ 00:06:33.885 09:38:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:33.885 09:38:21 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:33.885 09:38:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:33.885 09:38:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.885 09:38:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:33.885 ************************************ 00:06:33.885 START TEST bdev_bounds 00:06:33.885 ************************************ 00:06:33.885 09:38:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:33.885 09:38:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61369 00:06:33.885 09:38:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.885 09:38:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:33.885 Process bdevio pid: 61369 00:06:33.885 09:38:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61369' 00:06:33.885 09:38:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61369 00:06:33.886 09:38:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61369 ']' 00:06:33.886 09:38:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.886 09:38:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.886 09:38:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.886 09:38:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.886 09:38:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:33.886 [2024-12-05 09:38:21.301065] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
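(bdevio is started with -w, so it appears to idle until tests.py triggers perform_tests over the RPC socket further below; the -c 0x7 core mask in the EAL line that follows is why three reactors come up. The sizes in the I/O targets list it prints are simply num_blocks x block_size, e.g. for Nvme3n1:)

awk 'BEGIN { printf "%d MiB\n", 262144 * 4096 / 1048576 }'   # -> 1024 MiB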
00:06:33.886 [2024-12-05 09:38:21.301190] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61369 ] 00:06:33.886 [2024-12-05 09:38:21.459857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.143 [2024-12-05 09:38:21.564680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.143 [2024-12-05 09:38:21.564892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.143 [2024-12-05 09:38:21.564953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.710 09:38:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.710 09:38:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:34.710 09:38:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:34.710 I/O targets: 00:06:34.710 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:34.710 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:34.710 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:34.710 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:34.710 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:34.710 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:34.710 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:34.710 00:06:34.710 00:06:34.710 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.710 http://cunit.sourceforge.net/ 00:06:34.710 00:06:34.710 00:06:34.710 Suite: bdevio tests on: Nvme3n1 00:06:34.710 Test: blockdev write read block ...passed 00:06:34.710 Test: blockdev write zeroes read block ...passed 00:06:34.710 Test: blockdev write zeroes read no split ...passed 00:06:34.710 Test: blockdev write zeroes read split ...passed 00:06:34.710 Test: blockdev write zeroes read split partial ...passed 00:06:34.710 Test: blockdev reset ...[2024-12-05 09:38:22.278489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:34.710 [2024-12-05 09:38:22.281372] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:34.710 passed 00:06:34.710 Test: blockdev write read 8 blocks ...passed 00:06:34.710 Test: blockdev write read size > 128k ...passed 00:06:34.710 Test: blockdev write read invalid size ...passed 00:06:34.710 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:34.710 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:34.710 Test: blockdev write read max offset ...passed 00:06:34.710 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:34.710 Test: blockdev writev readv 8 blocks ...passed 00:06:34.710 Test: blockdev writev readv 30 x 1block ...passed 00:06:34.710 Test: blockdev writev readv block ...passed 00:06:34.710 Test: blockdev writev readv size > 128k ...passed 00:06:34.710 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:34.710 Test: blockdev comparev and writev ...[2024-12-05 09:38:22.287941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4a04000 len:0x1000 00:06:34.710 passed 00:06:34.710 Test: blockdev nvme passthru rw ...[2024-12-05 09:38:22.287990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:34.710 passed 00:06:34.710 Test: blockdev nvme passthru vendor specific ...[2024-12-05 09:38:22.288662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:34.710 [2024-12-05 09:38:22.288690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:34.710 passed 00:06:34.710 Test: blockdev nvme admin passthru ...passed 00:06:34.710 Test: blockdev copy ...passed 00:06:34.710 Suite: bdevio tests on: Nvme2n3 00:06:34.710 Test: blockdev write read block ...passed 00:06:34.710 Test: blockdev write zeroes read block ...passed 00:06:34.710 Test: blockdev write zeroes read no split ...passed 00:06:34.710 Test: blockdev write zeroes read split ...passed 00:06:34.968 Test: blockdev write zeroes read split partial ...passed 00:06:34.968 Test: blockdev reset ...[2024-12-05 09:38:22.346765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:34.968 [2024-12-05 09:38:22.349679] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
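(Note on the COMPARE FAILURE (02/85) completions in these suites: status 02/85 decodes as Media and Data Integrity Errors / Compare Failure, and the comparev-and-writev test appears to provoke the miscompare on purpose, so the NOTICE-level failure is the expected path and each suite still reports passed. One way to count the miscompares over a captured console log, filename hypothetical:)

grep -c 'COMPARE FAILURE (02/85)' bdevio-console.log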
00:06:34.968 passed 00:06:34.969 Test: blockdev write read 8 blocks ...passed 00:06:34.969 Test: blockdev write read size > 128k ...passed 00:06:34.969 Test: blockdev write read invalid size ...passed 00:06:34.969 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:34.969 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:34.969 Test: blockdev write read max offset ...passed 00:06:34.969 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:34.969 Test: blockdev writev readv 8 blocks ...passed 00:06:34.969 Test: blockdev writev readv 30 x 1block ...passed 00:06:34.969 Test: blockdev writev readv block ...passed 00:06:34.969 Test: blockdev writev readv size > 128k ...passed 00:06:34.969 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:34.969 Test: blockdev comparev and writev ...[2024-12-05 09:38:22.356267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4a02000 len:0x1000 00:06:34.969 passed 00:06:34.969 Test: blockdev nvme passthru rw ...[2024-12-05 09:38:22.356311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:34.969 passed 00:06:34.969 Test: blockdev nvme passthru vendor specific ...[2024-12-05 09:38:22.356825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:34.969 [2024-12-05 09:38:22.356853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:34.969 passed 00:06:34.969 Test: blockdev nvme admin passthru ...passed 00:06:34.969 Test: blockdev copy ...passed 00:06:34.969 Suite: bdevio tests on: Nvme2n2 00:06:34.969 Test: blockdev write read block ...passed 00:06:34.969 Test: blockdev write zeroes read block ...passed 00:06:34.969 Test: blockdev write zeroes read no split ...passed 00:06:34.969 Test: blockdev write zeroes read split ...passed 00:06:34.969 Test: blockdev write zeroes read split partial ...passed 00:06:34.969 Test: blockdev reset ...[2024-12-05 09:38:22.413793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:34.969 [2024-12-05 09:38:22.418235] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:34.969 passed 00:06:34.969 Test: blockdev write read 8 blocks ...passed 00:06:34.969 Test: blockdev write read size > 128k ...passed 00:06:34.969 Test: blockdev write read invalid size ...passed 00:06:34.969 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:34.969 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:34.969 Test: blockdev write read max offset ...passed 00:06:34.969 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:34.969 Test: blockdev writev readv 8 blocks ...passed 00:06:34.969 Test: blockdev writev readv 30 x 1block ...passed 00:06:34.969 Test: blockdev writev readv block ...passed 00:06:34.969 Test: blockdev writev readv size > 128k ...passed 00:06:34.969 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:34.969 Test: blockdev comparev and writev ...[2024-12-05 09:38:22.425056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dba38000 len:0x1000 00:06:34.969 passed 00:06:34.969 Test: blockdev nvme passthru rw ...[2024-12-05 09:38:22.425097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:34.969 passed 00:06:34.969 Test: blockdev nvme passthru vendor specific ...[2024-12-05 09:38:22.425600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:34.969 passed 00:06:34.969 Test: blockdev nvme admin passthru ...[2024-12-05 09:38:22.425623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:34.969 passed 00:06:34.969 Test: blockdev copy ...passed 00:06:34.969 Suite: bdevio tests on: Nvme2n1 00:06:34.969 Test: blockdev write read block ...passed 00:06:34.969 Test: blockdev write zeroes read block ...passed 00:06:34.969 Test: blockdev write zeroes read no split ...passed 00:06:34.969 Test: blockdev write zeroes read split ...passed 00:06:34.969 Test: blockdev write zeroes read split partial ...passed 00:06:34.969 Test: blockdev reset ...[2024-12-05 09:38:22.481099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:34.969 [2024-12-05 09:38:22.484152] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:34.969 passed 00:06:34.969 Test: blockdev write read 8 blocks ...passed 00:06:34.969 Test: blockdev write read size > 128k ...passed 00:06:34.969 Test: blockdev write read invalid size ...passed 00:06:34.969 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:34.969 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:34.969 Test: blockdev write read max offset ...passed 00:06:34.969 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:34.969 Test: blockdev writev readv 8 blocks ...passed 00:06:34.969 Test: blockdev writev readv 30 x 1block ...passed 00:06:34.969 Test: blockdev writev readv block ...passed 00:06:34.969 Test: blockdev writev readv size > 128k ...passed 00:06:34.969 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:34.969 Test: blockdev comparev and writev ...[2024-12-05 09:38:22.490903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dba34000 len:0x1000 00:06:34.969 [2024-12-05 09:38:22.490948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:34.969 passed 00:06:34.969 Test: blockdev nvme passthru rw ...passed 00:06:34.969 Test: blockdev nvme passthru vendor specific ...[2024-12-05 09:38:22.491586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:34.969 [2024-12-05 09:38:22.491611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:34.969 passed 00:06:34.969 Test: blockdev nvme admin passthru ...passed 00:06:34.969 Test: blockdev copy ...passed 00:06:34.969 Suite: bdevio tests on: Nvme1n1p2 00:06:34.969 Test: blockdev write read block ...passed 00:06:34.969 Test: blockdev write zeroes read block ...passed 00:06:34.969 Test: blockdev write zeroes read no split ...passed 00:06:34.969 Test: blockdev write zeroes read split ...passed 00:06:34.969 Test: blockdev write zeroes read split partial ...passed 00:06:34.969 Test: blockdev reset ...[2024-12-05 09:38:22.550255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:34.969 [2024-12-05 09:38:22.552868] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:34.969 passed 00:06:34.969 Test: blockdev write read 8 blocks ...passed 00:06:34.969 Test: blockdev write read size > 128k ...passed 00:06:34.969 Test: blockdev write read invalid size ...passed 00:06:34.969 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:34.969 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:34.969 Test: blockdev write read max offset ...passed 00:06:34.969 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:34.969 Test: blockdev writev readv 8 blocks ...passed 00:06:34.969 Test: blockdev writev readv 30 x 1block ...passed 00:06:34.969 Test: blockdev writev readv block ...passed 00:06:34.969 Test: blockdev writev readv size > 128k ...passed 00:06:34.969 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:34.969 Test: blockdev comparev and writev ...[2024-12-05 09:38:22.562723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2dba30000 len:0x1000 00:06:34.969 [2024-12-05 09:38:22.562863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:34.969 passed 00:06:34.969 Test: blockdev nvme passthru rw ...passed 00:06:34.969 Test: blockdev nvme passthru vendor specific ...passed 00:06:34.969 Test: blockdev nvme admin passthru ...passed 00:06:34.970 Test: blockdev copy ...passed 00:06:34.970 Suite: bdevio tests on: Nvme1n1p1 00:06:34.970 Test: blockdev write read block ...passed 00:06:34.970 Test: blockdev write zeroes read block ...passed 00:06:34.970 Test: blockdev write zeroes read no split ...passed 00:06:34.970 Test: blockdev write zeroes read split ...passed 00:06:35.228 Test: blockdev write zeroes read split partial ...passed 00:06:35.228 Test: blockdev reset ...[2024-12-05 09:38:22.609174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:35.228 [2024-12-05 09:38:22.611811] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
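[editor's note] Note the LBA translation in the Nvme1n1p2 suite above: bdevio always targets block 0 of the bdev under test, yet the COMPARE that reaches the controller carries lba:655360, because the GPT part bdev adds the partition's start offset before submitting to the underlying namespace (the Nvme1n1p1 suite below shows the same effect at lba:256). One way to see those raw offsets is to read the GPT off an exported device; a sketch assuming the parent Nvme1n1 bdev were exported on some /dev/nbdX (hypothetical, since this run exports only the partition bdevs) and that sgdisk is installed:

    # The 'Start' column is the per-partition LBA offset the part bdev applies
    # (256 and 655360 for the two partitions seen in this trace).
    sgdisk -p /dev/nbdX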
00:06:35.228 passed 00:06:35.228 Test: blockdev write read 8 blocks ...passed 00:06:35.228 Test: blockdev write read size > 128k ...passed 00:06:35.228 Test: blockdev write read invalid size ...passed 00:06:35.228 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.228 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.228 Test: blockdev write read max offset ...passed 00:06:35.228 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.228 Test: blockdev writev readv 8 blocks ...passed 00:06:35.228 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.228 Test: blockdev writev readv block ...passed 00:06:35.228 Test: blockdev writev readv size > 128k ...passed 00:06:35.228 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.228 Test: blockdev comparev and writev ...[2024-12-05 09:38:22.619550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b4c0e000 len:0x1000 00:06:35.228 [2024-12-05 09:38:22.619658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.228 passed 00:06:35.228 Test: blockdev nvme passthru rw ...passed 00:06:35.228 Test: blockdev nvme passthru vendor specific ...passed 00:06:35.228 Test: blockdev nvme admin passthru ...passed 00:06:35.228 Test: blockdev copy ...passed 00:06:35.228 Suite: bdevio tests on: Nvme0n1 00:06:35.228 Test: blockdev write read block ...passed 00:06:35.228 Test: blockdev write zeroes read block ...passed 00:06:35.228 Test: blockdev write zeroes read no split ...passed 00:06:35.228 Test: blockdev write zeroes read split ...passed 00:06:35.228 Test: blockdev write zeroes read split partial ...passed 00:06:35.228 Test: blockdev reset ...[2024-12-05 09:38:22.664249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:35.228 [2024-12-05 09:38:22.666711] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:35.228 passed 00:06:35.228 Test: blockdev write read 8 blocks ...passed 00:06:35.228 Test: blockdev write read size > 128k ...passed 00:06:35.228 Test: blockdev write read invalid size ...passed 00:06:35.228 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.228 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.228 Test: blockdev write read max offset ...passed 00:06:35.228 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.228 Test: blockdev writev readv 8 blocks ...passed 00:06:35.228 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.228 Test: blockdev writev readv block ...passed 00:06:35.228 Test: blockdev writev readv size > 128k ...passed 00:06:35.228 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.228 Test: blockdev comparev and writev ...[2024-12-05 09:38:22.672412] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:35.228 separate metadata which is not supported yet. 
00:06:35.228 passed 00:06:35.228 Test: blockdev nvme passthru rw ...passed 00:06:35.228 Test: blockdev nvme passthru vendor specific ...[2024-12-05 09:38:22.672764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:35.228 [2024-12-05 09:38:22.672801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:35.228 passed 00:06:35.228 Test: blockdev nvme admin passthru ...passed 00:06:35.228 Test: blockdev copy ...passed 00:06:35.228 00:06:35.228 Run Summary: Type Total Ran Passed Failed Inactive 00:06:35.228 suites 7 7 n/a 0 0 00:06:35.228 tests 161 161 161 0 0 00:06:35.228 asserts 1025 1025 1025 0 n/a 00:06:35.228 00:06:35.228 Elapsed time = 1.164 seconds 00:06:35.228 0 00:06:35.228 09:38:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61369 00:06:35.228 09:38:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61369 ']' 00:06:35.228 09:38:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61369 00:06:35.228 09:38:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:35.228 09:38:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.228 09:38:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61369 00:06:35.228 09:38:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.228 09:38:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.228 killing process with pid 61369 00:06:35.228 09:38:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61369' 00:06:35.228 09:38:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61369 00:06:35.228 09:38:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61369 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:35.881 00:06:35.881 real 0m2.144s 00:06:35.881 user 0m5.419s 00:06:35.881 sys 0m0.287s 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:35.881 ************************************ 00:06:35.881 END TEST bdev_bounds 00:06:35.881 ************************************ 00:06:35.881 09:38:23 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:35.881 09:38:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:35.881 09:38:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.881 09:38:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.881 ************************************ 00:06:35.881 START TEST bdev_nbd 00:06:35.881 ************************************ 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:35.881 09:38:23 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61423 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61423 /var/tmp/spdk-nbd.sock 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61423 ']' 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:35.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.881 09:38:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:35.881 [2024-12-05 09:38:23.491675] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
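[editor's note] The launch just traced is the standard setup for the NBD tests: start bdev_svc against the test's JSON bdev config and a private RPC socket, then block until the socket answers (the "Waiting for process to start up and listen..." message above). A minimal standalone sketch of the same flow, reusing the paths from this log and probing readiness with the generic rpc_get_methods call (the poll count and sleep interval are assumptions; the harness uses its own waitforlisten helper):

    # Start the bdev service with the test's JSON config and RPC socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    svc_pid=$!

    # Poll the UNIX-domain RPC socket until the app starts answering.
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done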
00:06:35.881 [2024-12-05 09:38:23.491800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.139 [2024-12-05 09:38:23.651524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.139 [2024-12-05 09:38:23.753318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.072 1+0 records in 00:06:37.072 1+0 records out 00:06:37.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488397 s, 8.4 MB/s 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:37.072 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:37.329 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.330 1+0 records in 00:06:37.330 1+0 records out 00:06:37.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480131 s, 8.5 MB/s 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:37.330 09:38:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.588 1+0 records in 00:06:37.588 1+0 records out 00:06:37.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433703 s, 9.4 MB/s 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:37.588 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:37.844 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:37.844 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:37.844 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:37.844 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:37.844 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.845 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.845 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.845 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:37.845 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.845 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.845 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.845 09:38:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.845 1+0 records in 00:06:37.845 1+0 records out 00:06:37.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432229 s, 9.5 MB/s 00:06:37.845 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.845 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.845 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.845 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.845 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.845 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.845 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:37.845 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.102 1+0 records in 00:06:38.102 1+0 records out 00:06:38.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648525 s, 6.3 MB/s 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:38.102 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.360 1+0 records in 00:06:38.360 1+0 records out 00:06:38.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593729 s, 6.9 MB/s 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.360 09:38:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.618 1+0 records in 00:06:38.618 1+0 records out 00:06:38.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476632 s, 8.6 MB/s 00:06:38.618 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.618 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.618 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.618 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.618 09:38:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.618 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:38.618 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:38.618 09:38:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.618 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:38.618 { 00:06:38.618 "nbd_device": "/dev/nbd0", 00:06:38.618 "bdev_name": "Nvme0n1" 00:06:38.618 }, 00:06:38.618 { 00:06:38.618 "nbd_device": "/dev/nbd1", 00:06:38.618 "bdev_name": "Nvme1n1p1" 00:06:38.618 }, 00:06:38.618 { 00:06:38.618 "nbd_device": "/dev/nbd2", 00:06:38.618 "bdev_name": "Nvme1n1p2" 00:06:38.618 }, 00:06:38.618 { 00:06:38.618 "nbd_device": "/dev/nbd3", 00:06:38.618 "bdev_name": "Nvme2n1" 00:06:38.618 }, 00:06:38.618 { 00:06:38.618 "nbd_device": "/dev/nbd4", 00:06:38.618 "bdev_name": "Nvme2n2" 00:06:38.618 }, 00:06:38.618 { 00:06:38.618 "nbd_device": "/dev/nbd5", 00:06:38.618 "bdev_name": "Nvme2n3" 00:06:38.618 }, 00:06:38.618 { 00:06:38.618 "nbd_device": "/dev/nbd6", 00:06:38.618 "bdev_name": "Nvme3n1" 00:06:38.618 } 00:06:38.618 ]' 00:06:38.618 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:38.618 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:38.618 { 00:06:38.618 "nbd_device": "/dev/nbd0", 00:06:38.618 "bdev_name": "Nvme0n1" 00:06:38.618 }, 00:06:38.618 { 00:06:38.618 "nbd_device": "/dev/nbd1", 00:06:38.618 "bdev_name": "Nvme1n1p1" 00:06:38.618 }, 00:06:38.618 { 00:06:38.618 "nbd_device": "/dev/nbd2", 00:06:38.618 "bdev_name": "Nvme1n1p2" 00:06:38.618 }, 00:06:38.618 { 00:06:38.618 "nbd_device": "/dev/nbd3", 00:06:38.618 "bdev_name": "Nvme2n1" 00:06:38.618 }, 00:06:38.618 { 00:06:38.618 "nbd_device": "/dev/nbd4", 00:06:38.618 "bdev_name": "Nvme2n2" 00:06:38.618 }, 00:06:38.618 { 00:06:38.618 "nbd_device": "/dev/nbd5", 00:06:38.618 "bdev_name": "Nvme2n3" 00:06:38.618 }, 00:06:38.618 { 00:06:38.618 "nbd_device": "/dev/nbd6", 00:06:38.618 "bdev_name": "Nvme3n1" 00:06:38.618 } 00:06:38.618 ]' 00:06:38.618 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:38.618 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:38.618 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.618 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:38.618 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.618 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:38.618 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.618 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.875 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.875 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.875 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.875 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.875 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.875 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.875 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.875 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.875 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.875 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.132 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.132 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.132 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.132 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.132 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.132 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.132 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.132 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.132 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.132 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:39.389 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:39.389 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:39.389 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:39.389 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.389 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.389 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:39.389 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.389 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.389 09:38:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.389 09:38:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:39.647 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:39.647 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:39.647 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:39.647 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.647 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.647 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:39.647 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.647 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.647 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.647 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.905 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:40.163 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:40.163 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:40.163 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:06:40.163 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.163 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.163 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:40.163 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.163 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.163 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.163 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.163 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.421 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.422 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:40.422 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.422 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:40.422 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.422 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.422 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.422 
09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:40.422 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.422 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:40.422 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.422 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:40.422 09:38:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:40.680 /dev/nbd0 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.680 1+0 records in 00:06:40.680 1+0 records out 00:06:40.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380267 s, 10.8 MB/s 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:40.680 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:40.939 /dev/nbd1 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:40.939 09:38:28 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.939 1+0 records in 00:06:40.939 1+0 records out 00:06:40.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392811 s, 10.4 MB/s 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:40.939 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:41.197 /dev/nbd10 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.197 1+0 records in 00:06:41.197 1+0 records out 00:06:41.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387805 s, 10.6 MB/s 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:41.197 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:41.457 /dev/nbd11 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.457 1+0 records in 00:06:41.457 1+0 records out 00:06:41.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567299 s, 7.2 MB/s 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:41.457 09:38:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:41.715 /dev/nbd12 00:06:41.715 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:41.715 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:41.715 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:41.715 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.715 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.715 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.715 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
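[editor's note] The waitfornbd helper being traced here is the gate after every nbd_start_disk: it polls /proc/partitions until the kernel publishes the device, then forces a single direct-I/O read through it, so a broken nbd connection fails immediately instead of at first real use. A standalone reconstruction from the trace (loop bounds, the 4 KiB O_DIRECT read, and the stat size check mirror the traced commands; the sleep interval and the /tmp output path are assumptions):

    waitfornbd() {
        local nbd_name=$1 i
        # Wait (up to 20 tries) for the kernel to publish the device node.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Prove the data path: one 4 KiB O_DIRECT read must succeed and
        # produce a non-empty file; retry a bounded number of times.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                local size
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }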
00:06:41.715 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.715 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.716 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.716 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.716 1+0 records in 00:06:41.716 1+0 records out 00:06:41.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464364 s, 8.8 MB/s 00:06:41.716 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.716 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.716 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.716 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.716 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.716 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.716 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:41.716 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:41.974 /dev/nbd13 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.974 1+0 records in 00:06:41.974 1+0 records out 00:06:41.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349265 s, 11.7 MB/s 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:41.974 /dev/nbd14 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.974 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.975 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.234 1+0 records in 00:06:42.234 1+0 records out 00:06:42.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641345 s, 6.4 MB/s 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:42.234 { 00:06:42.234 "nbd_device": "/dev/nbd0", 00:06:42.234 "bdev_name": "Nvme0n1" 00:06:42.234 }, 00:06:42.234 { 00:06:42.234 "nbd_device": "/dev/nbd1", 00:06:42.234 "bdev_name": "Nvme1n1p1" 00:06:42.234 }, 00:06:42.234 { 00:06:42.234 "nbd_device": "/dev/nbd10", 00:06:42.234 "bdev_name": "Nvme1n1p2" 00:06:42.234 }, 00:06:42.234 { 00:06:42.234 "nbd_device": "/dev/nbd11", 00:06:42.234 "bdev_name": "Nvme2n1" 00:06:42.234 }, 00:06:42.234 { 00:06:42.234 "nbd_device": "/dev/nbd12", 00:06:42.234 "bdev_name": "Nvme2n2" 00:06:42.234 }, 00:06:42.234 { 00:06:42.234 "nbd_device": "/dev/nbd13", 00:06:42.234 "bdev_name": "Nvme2n3" 
00:06:42.234 }, 00:06:42.234 { 00:06:42.234 "nbd_device": "/dev/nbd14", 00:06:42.234 "bdev_name": "Nvme3n1" 00:06:42.234 } 00:06:42.234 ]' 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:42.234 { 00:06:42.234 "nbd_device": "/dev/nbd0", 00:06:42.234 "bdev_name": "Nvme0n1" 00:06:42.234 }, 00:06:42.234 { 00:06:42.234 "nbd_device": "/dev/nbd1", 00:06:42.234 "bdev_name": "Nvme1n1p1" 00:06:42.234 }, 00:06:42.234 { 00:06:42.234 "nbd_device": "/dev/nbd10", 00:06:42.234 "bdev_name": "Nvme1n1p2" 00:06:42.234 }, 00:06:42.234 { 00:06:42.234 "nbd_device": "/dev/nbd11", 00:06:42.234 "bdev_name": "Nvme2n1" 00:06:42.234 }, 00:06:42.234 { 00:06:42.234 "nbd_device": "/dev/nbd12", 00:06:42.234 "bdev_name": "Nvme2n2" 00:06:42.234 }, 00:06:42.234 { 00:06:42.234 "nbd_device": "/dev/nbd13", 00:06:42.234 "bdev_name": "Nvme2n3" 00:06:42.234 }, 00:06:42.234 { 00:06:42.234 "nbd_device": "/dev/nbd14", 00:06:42.234 "bdev_name": "Nvme3n1" 00:06:42.234 } 00:06:42.234 ]' 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:42.234 /dev/nbd1 00:06:42.234 /dev/nbd10 00:06:42.234 /dev/nbd11 00:06:42.234 /dev/nbd12 00:06:42.234 /dev/nbd13 00:06:42.234 /dev/nbd14' 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:42.234 /dev/nbd1 00:06:42.234 /dev/nbd10 00:06:42.234 /dev/nbd11 00:06:42.234 /dev/nbd12 00:06:42.234 /dev/nbd13 00:06:42.234 /dev/nbd14' 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:42.234 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:42.235 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:42.493 256+0 records in 00:06:42.493 256+0 records out 00:06:42.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00669683 s, 157 MB/s 00:06:42.493 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.493 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.493 256+0 records in 00:06:42.493 256+0 records out 00:06:42.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0829014 s, 12.6 MB/s 00:06:42.493 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.493 09:38:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.493 256+0 records in 00:06:42.493 256+0 records out 00:06:42.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0847678 s, 12.4 MB/s 00:06:42.493 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.493 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:42.493 256+0 records in 00:06:42.493 256+0 records out 00:06:42.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0765656 s, 13.7 MB/s 00:06:42.493 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.493 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:42.751 256+0 records in 00:06:42.751 256+0 records out 00:06:42.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.083792 s, 12.5 MB/s 00:06:42.751 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.751 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:42.751 256+0 records in 00:06:42.751 256+0 records out 00:06:42.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0756511 s, 13.9 MB/s 00:06:42.751 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.751 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:42.751 256+0 records in 00:06:42.751 256+0 records out 00:06:42.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0746449 s, 14.0 MB/s 00:06:42.751 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.751 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:43.009 256+0 records in 00:06:43.009 256+0 records out 00:06:43.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0754496 s, 13.9 MB/s 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.009 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:43.267 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:43.267 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:43.267 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:43.267 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.267 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.267 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:43.267 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.267 09:38:30 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:43.267 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.267 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:43.525 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:43.525 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:43.525 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:43.525 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.525 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.525 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:43.525 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.525 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.525 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.525 09:38:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.784 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:44.042 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:44.042 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:44.042 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:44.042 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.042 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.042 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:44.042 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.042 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.042 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.042 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:44.301 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:44.301 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:44.301 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:44.301 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.301 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.301 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:44.301 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.301 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.301 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.301 09:38:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:44.559 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:44.559 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:44.559 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:44.559 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.559 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.559 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:44.559 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.559 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.559 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.559 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.559 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:44.818 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:45.076 malloc_lvol_verify 00:06:45.076 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:45.076 a747e85d-c748-4a0c-9dde-e82d8e4d3a30 00:06:45.076 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:45.334 2aadedf4-6e1a-45b9-88fc-ac3d13973901 00:06:45.334 09:38:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:45.592 /dev/nbd0 00:06:45.592 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:45.592 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:45.592 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:45.592 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:45.592 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:45.592 mke2fs 1.47.0 (5-Feb-2023) 00:06:45.592 Discarding device blocks: 0/4096 done 00:06:45.592 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:45.592 00:06:45.592 Allocating group tables: 0/1 done 00:06:45.592 Writing inode tables: 0/1 done 00:06:45.592 Creating journal (1024 blocks): done 00:06:45.592 Writing superblocks and filesystem accounting information: 0/1 done 00:06:45.592 00:06:45.592 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:45.592 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.592 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:45.592 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.592 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:45.592 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:45.592 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61423 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61423 ']' 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61423 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61423 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61423' 00:06:45.850 killing process with pid 61423 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61423 00:06:45.850 09:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61423 00:06:46.417 09:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:46.417 00:06:46.417 real 0m10.528s 00:06:46.417 user 0m15.175s 00:06:46.417 sys 0m3.446s 00:06:46.417 09:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.417 09:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:46.417 ************************************ 00:06:46.417 END TEST bdev_nbd 00:06:46.417 ************************************ 00:06:46.417 09:38:33 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:46.417 09:38:33 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:06:46.417 09:38:33 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:06:46.417 skipping fio tests on NVMe due to multi-ns failures. 00:06:46.417 09:38:33 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:46.417 09:38:33 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:46.417 09:38:33 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:46.417 09:38:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:46.417 09:38:33 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.417 09:38:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:46.417 ************************************ 00:06:46.417 START TEST bdev_verify 00:06:46.417 ************************************ 00:06:46.417 09:38:33 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:46.675 [2024-12-05 09:38:34.052628] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:46.675 [2024-12-05 09:38:34.052721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61837 ] 00:06:46.675 [2024-12-05 09:38:34.193456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.675 [2024-12-05 09:38:34.275858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.675 [2024-12-05 09:38:34.275920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.239 Running I/O for 5 seconds... 
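The verify stage starting here is the stock bdevperf example run against the bdevs described in bdev.json; the invocation, copied from the run_test line above, can be repeated by hand:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3    # qd 128, 4 KiB I/O, verify for 5 s on cores 0-1

Because -m 0x3 gives bdevperf two reactor cores, each bdev appears twice in the result table that follows, one job per core (Core Mask 0x1 and 0x2), with the verification LBA range split between them.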
00:06:49.640 21248.00 IOPS, 83.00 MiB/s [2024-12-05T09:38:38.204Z] 22208.00 IOPS, 86.75 MiB/s [2024-12-05T09:38:39.138Z] 22656.00 IOPS, 88.50 MiB/s [2024-12-05T09:38:40.072Z] 22816.00 IOPS, 89.12 MiB/s [2024-12-05T09:38:40.072Z] 22809.60 IOPS, 89.10 MiB/s
00:06:52.443 Latency(us)
00:06:52.443 [2024-12-05T09:38:40.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:52.443 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:52.443 Verification LBA range: start 0x0 length 0xbd0bd
00:06:52.443 Nvme0n1 : 5.06 1617.45 6.32 0.00 0.00 78943.58 14720.39 83482.78
00:06:52.443 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:52.443 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:52.443 Nvme0n1 : 5.05 1595.32 6.23 0.00 0.00 80010.95 14417.92 85902.57
00:06:52.443 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:52.443 Verification LBA range: start 0x0 length 0x4ff80
00:06:52.443 Nvme1n1p1 : 5.07 1616.99 6.32 0.00 0.00 78838.73 15930.29 75416.81
00:06:52.443 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:52.443 Verification LBA range: start 0x4ff80 length 0x4ff80
00:06:52.443 Nvme1n1p1 : 5.06 1594.87 6.23 0.00 0.00 79862.32 16938.54 75820.11
00:06:52.443 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:52.443 Verification LBA range: start 0x0 length 0x4ff7f
00:06:52.443 Nvme1n1p2 : 5.07 1616.52 6.31 0.00 0.00 78693.66 15829.46 68560.74
00:06:52.443 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:52.443 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:06:52.443 Nvme1n1p2 : 5.06 1594.40 6.23 0.00 0.00 79740.31 15829.46 69367.34
00:06:52.443 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:52.443 Verification LBA range: start 0x0 length 0x80000
00:06:52.443 Nvme2n1 : 5.07 1616.07 6.31 0.00 0.00 78552.87 16031.11 65737.65
00:06:52.443 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:52.443 Verification LBA range: start 0x80000 length 0x80000
00:06:52.443 Nvme2n1 : 5.06 1593.94 6.23 0.00 0.00 79585.56 15627.82 66544.25
00:06:52.443 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:52.443 Verification LBA range: start 0x0 length 0x80000
00:06:52.443 Nvme2n2 : 5.07 1615.62 6.31 0.00 0.00 78399.32 15224.52 67754.14
00:06:52.443 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:52.443 Verification LBA range: start 0x80000 length 0x80000
00:06:52.443 Nvme2n2 : 5.07 1602.48 6.26 0.00 0.00 78997.85 3075.15 65737.65
00:06:52.443 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:52.443 Verification LBA range: start 0x0 length 0x80000
00:06:52.443 Nvme2n3 : 5.07 1615.18 6.31 0.00 0.00 78249.83 14417.92 71383.83
00:06:52.443 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:52.443 Verification LBA range: start 0x80000 length 0x80000
00:06:52.443 Nvme2n3 : 5.08 1611.55 6.30 0.00 0.00 78455.57 7057.72 69770.63
00:06:52.443 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:52.443 Verification LBA range: start 0x0 length 0x20000
00:06:52.443 Nvme3n1 : 5.08 1625.22 6.35 0.00 0.00 77640.94 1751.83 73400.32
00:06:52.443 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:52.443 Verification LBA range: start 0x20000 length 0x20000
00:06:52.443 Nvme3n1 : 5.08 1611.13 6.29 0.00 0.00 78301.65 7410.61 71787.13 [2024-12-05T09:38:40.072Z] ===================================================================================================================
00:06:52.443 [2024-12-05T09:38:40.072Z] Total : 22526.73 88.00 0.00 0.00 78871.75 1751.83 85902.57
00:06:53.817
00:06:53.817 real 0m7.267s
00:06:53.817 user 0m13.717s
00:06:53.817 sys 0m0.180s
00:06:53.817 09:38:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:53.817 09:38:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:06:53.817 ************************************
00:06:53.817 END TEST bdev_verify
00:06:53.817 ************************************
00:06:53.817 09:38:41 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:53.817 09:38:41 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:53.817 09:38:41 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:53.817 09:38:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:53.817 ************************************
00:06:53.817 START TEST bdev_verify_big_io
00:06:53.817 ************************************
00:06:53.817 09:38:41 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:53.817 [2024-12-05 09:38:41.376027] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:06:53.817 [2024-12-05 09:38:41.376145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61929 ]
00:06:54.075 [2024-12-05 09:38:41.534598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:54.075 [2024-12-05 09:38:41.640691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:54.075 [2024-12-05 09:38:41.640781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.008 Running I/O for 5 seconds...
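The verify table above and the big-I/O table that follows both bury the aggregate in per-job rows; when comparing nightly runs, a throwaway filter over a saved console log (hypothetical file name bdevperf.log) is enough to pull out just the header and the Total row:

grep -E 'Device Information|Total[[:space:]]+:' bdevperf.log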
00:07:00.819 1399.00 IOPS, 87.44 MiB/s [2024-12-05T09:38:48.707Z] 3063.50 IOPS, 191.47 MiB/s
00:07:01.078 Latency(us)
00:07:01.078 [2024-12-05T09:38:48.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:01.078 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:01.078 Verification LBA range: start 0x0 length 0xbd0b
00:07:01.078 Nvme0n1 : 5.79 110.61 6.91 0.00 0.00 1088933.02 18148.43 1258291.20
00:07:01.078 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:01.078 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:01.078 Nvme0n1 : 5.98 90.06 5.63 0.00 0.00 1305264.81 31457.28 1490591.11
00:07:01.078 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:01.078 Verification LBA range: start 0x0 length 0x4ff8
00:07:01.078 Nvme1n1p1 : 5.79 121.63 7.60 0.00 0.00 979061.44 100824.62 1071160.71
00:07:01.078 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:01.078 Verification LBA range: start 0x4ff8 length 0x4ff8
00:07:01.078 Nvme1n1p1 : 5.99 96.21 6.01 0.00 0.00 1218340.15 135508.28 1258291.20
00:07:01.078 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:01.078 Verification LBA range: start 0x0 length 0x4ff7
00:07:01.078 Nvme1n1p2 : 5.96 123.76 7.74 0.00 0.00 928667.21 82676.18 942105.21
00:07:01.078 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:01.078 Verification LBA range: start 0x4ff7 length 0x4ff7
00:07:01.078 Nvme1n1p2 : 6.07 98.37 6.15 0.00 0.00 1158307.63 79449.80 1109877.37
00:07:01.078 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:01.078 Verification LBA range: start 0x0 length 0x8000
00:07:01.078 Nvme2n1 : 6.02 127.22 7.95 0.00 0.00 881497.24 80659.69 1142141.24
00:07:01.078 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:01.078 Verification LBA range: start 0x8000 length 0x8000
00:07:01.078 Nvme2n1 : 6.12 97.09 6.07 0.00 0.00 1134773.07 21475.64 2026171.47
00:07:01.078 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:01.078 Verification LBA range: start 0x0 length 0x8000
00:07:01.078 Nvme2n2 : 6.02 131.84 8.24 0.00 0.00 831432.29 64124.46 967916.31
00:07:01.078 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:01.078 Verification LBA range: start 0x8000 length 0x8000
00:07:01.078 Nvme2n2 : 6.12 101.74 6.36 0.00 0.00 1041561.49 27827.59 1780966.01
00:07:01.078 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:01.078 Verification LBA range: start 0x0 length 0x8000
00:07:01.078 Nvme2n3 : 6.08 143.00 8.94 0.00 0.00 749021.43 20669.05 987274.63
00:07:01.078 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:01.078 Verification LBA range: start 0x8000 length 0x8000
00:07:01.078 Nvme2n3 : 6.19 120.27 7.52 0.00 0.00 851027.83 13107.20 2116510.33
00:07:01.078 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:01.078 Verification LBA range: start 0x0 length 0x2000
00:07:01.078 Nvme3n1 : 6.09 151.90 9.49 0.00 0.00 682648.88 3705.30 1013085.74
00:07:01.078 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:01.078 Verification LBA range: start 0x2000 length 0x2000
00:07:01.078 Nvme3n1 : 6.30 183.30 11.46 0.00 0.00 544939.61 326.10 1555118.87
00:07:01.078 [2024-12-05T09:38:48.707Z] ===================================================================================================================
00:07:01.078 [2024-12-05T09:38:48.707Z] Total : 1696.98 106.06 0.00 0.00 913067.27 326.10 2116510.33
00:07:02.979
00:07:02.979 real 0m8.789s
00:07:02.979 user 0m16.670s
00:07:02.979 sys 0m0.250s
00:07:02.979 09:38:50 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:02.979 09:38:50 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:02.979 ************************************
00:07:02.979 END TEST bdev_verify_big_io
00:07:02.979 ************************************
00:07:02.979 09:38:50 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:02.979 09:38:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:02.979 09:38:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:02.979 09:38:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:02.979 ************************************
00:07:02.979 START TEST bdev_write_zeroes
00:07:02.979 ************************************
00:07:02.979 09:38:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:02.979 [2024-12-05 09:38:50.208087] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... [2024-12-05 09:38:50.208203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62045 ]
00:07:02.979 [2024-12-05 09:38:50.362502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:02.979 [2024-12-05 09:38:50.442137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.545 Running I/O for 1 seconds...
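In the write_zeroes results that follow, the MiB/s column is simply IOPS times the 4 KiB I/O size; a quick arithmetic check of the Nvme0n1 row:

awk 'BEGIN { printf "%.2f MiB/s\n", 4963.89 * 4096 / 1048576 }'   # prints 19.39, matching the table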
00:07:04.480 36278.00 IOPS, 141.71 MiB/s
00:07:04.480 Latency(us)
00:07:04.480 [2024-12-05T09:38:52.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:04.480 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:04.480 Nvme0n1 : 1.03 4963.89 19.39 0.00 0.00 25732.17 5772.21 422656.79
00:07:04.480 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:04.480 Nvme1n1p1 : 1.03 5230.23 20.43 0.00 0.00 24385.66 10788.23 269403.37
00:07:04.480 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:04.480 Nvme1n1p2 : 1.03 5237.47 20.46 0.00 0.00 24290.20 10788.23 271016.57
00:07:04.480 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:04.480 Nvme2n1 : 1.03 5280.14 20.63 0.00 0.00 24043.01 10838.65 271016.57
00:07:04.480 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:04.480 Nvme2n2 : 1.03 5274.22 20.60 0.00 0.00 24010.68 10637.00 271016.57
00:07:04.480 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:04.480 Nvme2n3 : 1.03 5268.33 20.58 0.00 0.00 23933.94 6604.01 269403.37
00:07:04.480 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:04.480 Nvme3n1 : 1.03 5262.47 20.56 0.00 0.00 23920.74 6553.60 271016.57
00:07:04.480 [2024-12-05T09:38:52.109Z] ===================================================================================================================
00:07:04.480 [2024-12-05T09:38:52.109Z] Total : 36516.74 142.64 0.00 0.00 24318.06 5772.21 422656.79
00:07:05.415
00:07:05.415 real 0m2.627s
00:07:05.415 user 0m2.345s
00:07:05.415 sys 0m0.169s
00:07:05.415 09:38:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:05.415 09:38:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:05.415 ************************************
00:07:05.415 END TEST bdev_write_zeroes
00:07:05.415 ************************************
00:07:05.415 09:38:52 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:05.415 09:38:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:05.415 09:38:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:05.415 09:38:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:05.415 ************************************
00:07:05.415 START TEST bdev_json_nonenclosed
00:07:05.415 ************************************
00:07:05.415 09:38:52 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:05.415 [2024-12-05 09:38:52.868942] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:07:05.415 [2024-12-05 09:38:52.869059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62098 ] 00:07:05.415 [2024-12-05 09:38:53.029983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.672 [2024-12-05 09:38:53.126948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.672 [2024-12-05 09:38:53.127023] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:05.672 [2024-12-05 09:38:53.127039] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:05.672 [2024-12-05 09:38:53.127048] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.929 00:07:05.929 real 0m0.495s 00:07:05.929 user 0m0.298s 00:07:05.929 sys 0m0.093s 00:07:05.929 09:38:53 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.929 09:38:53 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:05.929 ************************************ 00:07:05.929 END TEST bdev_json_nonenclosed 00:07:05.929 ************************************ 00:07:05.929 09:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:05.929 09:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:05.929 09:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.929 09:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:05.929 ************************************ 00:07:05.929 START TEST bdev_json_nonarray 00:07:05.929 ************************************ 00:07:05.929 09:38:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:05.929 [2024-12-05 09:38:53.421133] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:05.929 [2024-12-05 09:38:53.421257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62118 ] 00:07:06.186 [2024-12-05 09:38:53.582566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.186 [2024-12-05 09:38:53.685027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.186 [2024-12-05 09:38:53.685123] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:06.186 [2024-12-05 09:38:53.685140] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:06.186 [2024-12-05 09:38:53.685149] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.444 00:07:06.444 real 0m0.511s 00:07:06.444 user 0m0.304s 00:07:06.444 sys 0m0.102s 00:07:06.444 09:38:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.444 09:38:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:06.444 ************************************ 00:07:06.444 END TEST bdev_json_nonarray 00:07:06.444 ************************************ 00:07:06.444 09:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:06.444 09:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:06.444 09:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:06.444 09:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.444 09:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.444 09:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.444 ************************************ 00:07:06.444 START TEST bdev_gpt_uuid 00:07:06.444 ************************************ 00:07:06.444 09:38:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:06.444 09:38:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:06.444 09:38:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:06.444 09:38:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62150 00:07:06.444 09:38:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:06.444 09:38:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62150 00:07:06.444 09:38:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62150 ']' 00:07:06.444 09:38:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.444 09:38:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.444 09:38:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.444 09:38:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.444 09:38:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:06.444 09:38:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:06.444 [2024-12-05 09:38:54.005725] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
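The bdev_gpt_uuid test starting up here queries the two GPT partition bdevs by their fixed test GUIDs over the default /var/tmp/spdk.sock socket. Once the spdk_tgt instance has loaded bdev.json, the same lookups can be made by hand (UUIDs, RPC command, and jq paths as used in the trace below):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
    | jq -r '.[0].driver_specific.gpt.partition_name'    # SPDK_TEST_first
$rpc bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df \
    | jq -r '.[0].driver_specific.gpt.partition_name'    # SPDK_TEST_second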
00:07:06.444 [2024-12-05 09:38:54.005850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62150 ] 00:07:06.701 [2024-12-05 09:38:54.165485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.701 [2024-12-05 09:38:54.264521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.267 09:38:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.267 09:38:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:07.267 09:38:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:07.267 09:38:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.267 09:38:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:07.833 Some configs were skipped because the RPC state that can call them passed over. 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:07.833 { 00:07:07.833 "name": "Nvme1n1p1", 00:07:07.833 "aliases": [ 00:07:07.833 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:07.833 ], 00:07:07.833 "product_name": "GPT Disk", 00:07:07.833 "block_size": 4096, 00:07:07.833 "num_blocks": 655104, 00:07:07.833 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:07.833 "assigned_rate_limits": { 00:07:07.833 "rw_ios_per_sec": 0, 00:07:07.833 "rw_mbytes_per_sec": 0, 00:07:07.833 "r_mbytes_per_sec": 0, 00:07:07.833 "w_mbytes_per_sec": 0 00:07:07.833 }, 00:07:07.833 "claimed": false, 00:07:07.833 "zoned": false, 00:07:07.833 "supported_io_types": { 00:07:07.833 "read": true, 00:07:07.833 "write": true, 00:07:07.833 "unmap": true, 00:07:07.833 "flush": true, 00:07:07.833 "reset": true, 00:07:07.833 "nvme_admin": false, 00:07:07.833 "nvme_io": false, 00:07:07.833 "nvme_io_md": false, 00:07:07.833 "write_zeroes": true, 00:07:07.833 "zcopy": false, 00:07:07.833 "get_zone_info": false, 00:07:07.833 "zone_management": false, 00:07:07.833 "zone_append": false, 00:07:07.833 "compare": true, 00:07:07.833 "compare_and_write": false, 00:07:07.833 "abort": true, 00:07:07.833 "seek_hole": false, 00:07:07.833 "seek_data": false, 00:07:07.833 "copy": true, 00:07:07.833 "nvme_iov_md": false 00:07:07.833 }, 00:07:07.833 "driver_specific": { 
00:07:07.833 "gpt": { 00:07:07.833 "base_bdev": "Nvme1n1", 00:07:07.833 "offset_blocks": 256, 00:07:07.833 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:07.833 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:07.833 "partition_name": "SPDK_TEST_first" 00:07:07.833 } 00:07:07.833 } 00:07:07.833 } 00:07:07.833 ]' 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:07.833 { 00:07:07.833 "name": "Nvme1n1p2", 00:07:07.833 "aliases": [ 00:07:07.833 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:07.833 ], 00:07:07.833 "product_name": "GPT Disk", 00:07:07.833 "block_size": 4096, 00:07:07.833 "num_blocks": 655103, 00:07:07.833 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:07.833 "assigned_rate_limits": { 00:07:07.833 "rw_ios_per_sec": 0, 00:07:07.833 "rw_mbytes_per_sec": 0, 00:07:07.833 "r_mbytes_per_sec": 0, 00:07:07.833 "w_mbytes_per_sec": 0 00:07:07.833 }, 00:07:07.833 "claimed": false, 00:07:07.833 "zoned": false, 00:07:07.833 "supported_io_types": { 00:07:07.833 "read": true, 00:07:07.833 "write": true, 00:07:07.833 "unmap": true, 00:07:07.833 "flush": true, 00:07:07.833 "reset": true, 00:07:07.833 "nvme_admin": false, 00:07:07.833 "nvme_io": false, 00:07:07.833 "nvme_io_md": false, 00:07:07.833 "write_zeroes": true, 00:07:07.833 "zcopy": false, 00:07:07.833 "get_zone_info": false, 00:07:07.833 "zone_management": false, 00:07:07.833 "zone_append": false, 00:07:07.833 "compare": true, 00:07:07.833 "compare_and_write": false, 00:07:07.833 "abort": true, 00:07:07.833 "seek_hole": false, 00:07:07.833 "seek_data": false, 00:07:07.833 "copy": true, 00:07:07.833 "nvme_iov_md": false 00:07:07.833 }, 00:07:07.833 "driver_specific": { 00:07:07.833 "gpt": { 00:07:07.833 "base_bdev": "Nvme1n1", 00:07:07.833 "offset_blocks": 655360, 00:07:07.833 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:07.833 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:07.833 "partition_name": "SPDK_TEST_second" 00:07:07.833 } 00:07:07.833 } 00:07:07.833 } 00:07:07.833 ]' 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62150 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62150 ']' 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62150 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62150 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62150' 00:07:07.833 killing process with pid 62150 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62150 00:07:07.833 09:38:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62150 00:07:09.776 00:07:09.776 real 0m3.056s 00:07:09.776 user 0m3.174s 00:07:09.776 sys 0m0.373s 00:07:09.776 09:38:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.776 ************************************ 00:07:09.776 END TEST bdev_gpt_uuid 00:07:09.776 ************************************ 00:07:09.776 09:38:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:09.776 09:38:57 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:09.776 09:38:57 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:09.776 09:38:57 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:09.776 09:38:57 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:09.776 09:38:57 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:09.776 09:38:57 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:09.776 09:38:57 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:09.776 09:38:57 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:09.776 09:38:57 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:09.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:10.036 Waiting for block devices as requested 00:07:10.036 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:10.036 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:10.321 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:10.321 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:15.614 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:15.614 09:39:02 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:15.614 09:39:02 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:15.614 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:15.614 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:15.614 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:15.614 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:15.614 09:39:03 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:15.614 00:07:15.614 real 0m55.318s 00:07:15.614 user 1m11.147s 00:07:15.614 sys 0m7.541s 00:07:15.614 09:39:03 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.614 ************************************ 00:07:15.614 END TEST blockdev_nvme_gpt 00:07:15.614 ************************************ 00:07:15.614 09:39:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:15.614 09:39:03 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:15.614 09:39:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.614 09:39:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.614 09:39:03 -- common/autotest_common.sh@10 -- # set +x 00:07:15.614 ************************************ 00:07:15.614 START TEST nvme 00:07:15.614 ************************************ 00:07:15.614 09:39:03 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:15.873 * Looking for test storage... 00:07:15.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:15.873 09:39:03 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:15.873 09:39:03 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:15.873 09:39:03 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:15.873 09:39:03 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:15.873 09:39:03 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.873 09:39:03 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.873 09:39:03 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.873 09:39:03 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.873 09:39:03 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.873 09:39:03 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.873 09:39:03 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.873 09:39:03 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.873 09:39:03 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.873 09:39:03 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.873 09:39:03 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.873 09:39:03 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:15.873 09:39:03 nvme -- scripts/common.sh@345 -- # : 1 00:07:15.873 09:39:03 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.873 09:39:03 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:15.873 09:39:03 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:15.873 09:39:03 nvme -- scripts/common.sh@353 -- # local d=1 00:07:15.873 09:39:03 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.873 09:39:03 nvme -- scripts/common.sh@355 -- # echo 1 00:07:15.873 09:39:03 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.873 09:39:03 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:15.873 09:39:03 nvme -- scripts/common.sh@353 -- # local d=2 00:07:15.873 09:39:03 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.873 09:39:03 nvme -- scripts/common.sh@355 -- # echo 2 00:07:15.873 09:39:03 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.873 09:39:03 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.873 09:39:03 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.873 09:39:03 nvme -- scripts/common.sh@368 -- # return 0 00:07:15.873 09:39:03 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.873 09:39:03 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:15.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.873 --rc genhtml_branch_coverage=1 00:07:15.873 --rc genhtml_function_coverage=1 00:07:15.873 --rc genhtml_legend=1 00:07:15.873 --rc geninfo_all_blocks=1 00:07:15.873 --rc geninfo_unexecuted_blocks=1 00:07:15.873 00:07:15.873 ' 00:07:15.873 09:39:03 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:15.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.873 --rc genhtml_branch_coverage=1 00:07:15.873 --rc genhtml_function_coverage=1 00:07:15.873 --rc genhtml_legend=1 00:07:15.873 --rc geninfo_all_blocks=1 00:07:15.873 --rc geninfo_unexecuted_blocks=1 00:07:15.873 00:07:15.873 ' 00:07:15.873 09:39:03 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:15.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.873 --rc genhtml_branch_coverage=1 00:07:15.873 --rc genhtml_function_coverage=1 00:07:15.873 --rc genhtml_legend=1 00:07:15.873 --rc geninfo_all_blocks=1 00:07:15.873 --rc geninfo_unexecuted_blocks=1 00:07:15.873 00:07:15.873 ' 00:07:15.873 09:39:03 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:15.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.873 --rc genhtml_branch_coverage=1 00:07:15.873 --rc genhtml_function_coverage=1 00:07:15.873 --rc genhtml_legend=1 00:07:15.873 --rc geninfo_all_blocks=1 00:07:15.873 --rc geninfo_unexecuted_blocks=1 00:07:15.873 00:07:15.873 ' 00:07:15.873 09:39:03 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:16.132 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:16.700 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:16.700 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:16.700 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:16.700 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:16.700 09:39:04 nvme -- nvme/nvme.sh@79 -- # uname 00:07:16.700 09:39:04 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:16.700 09:39:04 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:16.700 09:39:04 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:16.700 09:39:04 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:16.700 09:39:04 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:16.700 09:39:04 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:16.700 09:39:04 nvme -- common/autotest_common.sh@1075 -- # stubpid=62785 00:07:16.700 09:39:04 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:16.700 09:39:04 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:16.700 Waiting for stub to ready for secondary processes... 00:07:16.700 09:39:04 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:16.700 09:39:04 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62785 ]] 00:07:16.700 09:39:04 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:16.961 [2024-12-05 09:39:04.348945] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:16.961 [2024-12-05 09:39:04.349063] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:17.534 [2024-12-05 09:39:05.104646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.795 [2024-12-05 09:39:05.201195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.795 [2024-12-05 09:39:05.201473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.795 [2024-12-05 09:39:05.201505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.795 [2024-12-05 09:39:05.214825] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:17.796 [2024-12-05 09:39:05.214859] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:17.796 [2024-12-05 09:39:05.225085] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:17.796 [2024-12-05 09:39:05.225260] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:17.796 [2024-12-05 09:39:05.229101] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:17.796 [2024-12-05 09:39:05.229499] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:17.796 [2024-12-05 09:39:05.229624] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:17.796 [2024-12-05 09:39:05.233009] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:17.796 [2024-12-05 09:39:05.233250] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:17.796 [2024-12-05 09:39:05.233350] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:17.796 [2024-12-05 09:39:05.236837] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:17.796 [2024-12-05 09:39:05.236992] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:17.796 [2024-12-05 09:39:05.237042] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:17.796 [2024-12-05 09:39:05.237089] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:17.796 [2024-12-05 09:39:05.237116] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:17.796 done. 00:07:17.796 09:39:05 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:17.796 09:39:05 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:17.796 09:39:05 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:17.796 09:39:05 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:17.796 09:39:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.796 09:39:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:17.796 ************************************ 00:07:17.796 START TEST nvme_reset 00:07:17.796 ************************************ 00:07:17.796 09:39:05 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:18.056 Initializing NVMe Controllers 00:07:18.056 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:18.056 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:18.056 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:18.056 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:18.056 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:18.056 00:07:18.056 real 0m0.220s 00:07:18.056 user 0m0.079s 00:07:18.056 sys 0m0.090s 00:07:18.056 09:39:05 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.056 09:39:05 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:18.056 ************************************ 00:07:18.056 END TEST nvme_reset 00:07:18.056 ************************************ 00:07:18.056 09:39:05 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:18.056 09:39:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.056 09:39:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.056 09:39:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:18.056 ************************************ 00:07:18.056 START TEST nvme_identify 00:07:18.056 ************************************ 00:07:18.056 09:39:05 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:18.056 09:39:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:18.056 09:39:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:18.056 09:39:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:18.056 09:39:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:18.056 09:39:05 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:18.056 09:39:05 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:18.056 09:39:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:18.056 09:39:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:18.056 09:39:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:18.056 09:39:05 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:18.056 09:39:05 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:18.056 09:39:05 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:18.319 
===================================================== 00:07:18.319 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:18.319 ===================================================== 00:07:18.319 Controller Capabilities/Features 00:07:18.319 ================================ 00:07:18.319 Vendor ID: 1b36 00:07:18.319 Subsystem Vendor ID: 1af4 00:07:18.319 Serial Number: 12340 00:07:18.319 Model Number: QEMU NVMe Ctrl 00:07:18.319 Firmware Version: 8.0.0 00:07:18.319 Recommended Arb Burst: 6 00:07:18.319 IEEE OUI Identifier: 00 54 52 00:07:18.319 Multi-path I/O 00:07:18.319 May have multiple subsystem ports: No 00:07:18.319 May have multiple controllers: No 00:07:18.319 Associated with SR-IOV VF: No 00:07:18.319 Max Data Transfer Size: 524288 00:07:18.319 Max Number of Namespaces: 256 00:07:18.319 Max Number of I/O Queues: 64 00:07:18.319 NVMe Specification Version (VS): 1.4 00:07:18.319 NVMe Specification Version (Identify): 1.4 00:07:18.319 Maximum Queue Entries: 2048 00:07:18.319 Contiguous Queues Required: Yes 00:07:18.319 Arbitration Mechanisms Supported 00:07:18.319 Weighted Round Robin: Not Supported 00:07:18.319 Vendor Specific: Not Supported 00:07:18.319 Reset Timeout: 7500 ms 00:07:18.319 Doorbell Stride: 4 bytes 00:07:18.319 NVM Subsystem Reset: Not Supported 00:07:18.319 Command Sets Supported 00:07:18.319 NVM Command Set: Supported 00:07:18.319 Boot Partition: Not Supported 00:07:18.319 Memory Page Size Minimum: 4096 bytes 00:07:18.319 Memory Page Size Maximum: 65536 bytes 00:07:18.319 Persistent Memory Region: Not Supported 00:07:18.319 Optional Asynchronous Events Supported 00:07:18.319 Namespace Attribute Notices: Supported 00:07:18.319 Firmware Activation Notices: Not Supported 00:07:18.319 ANA Change Notices: Not Supported 00:07:18.319 PLE Aggregate Log Change Notices: Not Supported 00:07:18.319 LBA Status Info Alert Notices: Not Supported 00:07:18.319 EGE Aggregate Log Change Notices: Not Supported 00:07:18.319 Normal NVM Subsystem Shutdown event: Not Supported 00:07:18.319 Zone Descriptor Change Notices: Not Supported 00:07:18.320 Discovery Log Change Notices: Not Supported 00:07:18.320 Controller Attributes 00:07:18.320 128-bit Host Identifier: Not Supported 00:07:18.320 Non-Operational Permissive Mode: Not Supported 00:07:18.320 NVM Sets: Not Supported 00:07:18.320 Read Recovery Levels: Not Supported 00:07:18.320 Endurance Groups: Not Supported 00:07:18.320 Predictable Latency Mode: Not Supported 00:07:18.320 Traffic Based Keep ALive: Not Supported 00:07:18.320 Namespace Granularity: Not Supported 00:07:18.320 SQ Associations: Not Supported 00:07:18.320 UUID List: Not Supported 00:07:18.320 Multi-Domain Subsystem: Not Supported 00:07:18.320 Fixed Capacity Management: Not Supported 00:07:18.320 Variable Capacity Management: Not Supported 00:07:18.320 Delete Endurance Group: Not Supported 00:07:18.320 Delete NVM Set: Not Supported 00:07:18.320 Extended LBA Formats Supported: Supported 00:07:18.320 Flexible Data Placement Supported: Not Supported 00:07:18.320 00:07:18.320 Controller Memory Buffer Support 00:07:18.320 ================================ 00:07:18.320 Supported: No 00:07:18.320 00:07:18.320 Persistent Memory Region Support 00:07:18.320 ================================ 00:07:18.320 Supported: No 00:07:18.320 00:07:18.320 Admin Command Set Attributes 00:07:18.320 ============================ 00:07:18.320 Security Send/Receive: Not Supported 00:07:18.320 Format NVM: Supported 00:07:18.320 Firmware Activate/Download: Not Supported 00:07:18.320 Namespace Management: 
Supported 00:07:18.320 Device Self-Test: Not Supported 00:07:18.320 Directives: Supported 00:07:18.320 NVMe-MI: Not Supported 00:07:18.320 Virtualization Management: Not Supported 00:07:18.320 Doorbell Buffer Config: Supported 00:07:18.320 Get LBA Status Capability: Not Supported 00:07:18.320 Command & Feature Lockdown Capability: Not Supported 00:07:18.320 Abort Command Limit: 4 00:07:18.320 Async Event Request Limit: 4 00:07:18.320 Number of Firmware Slots: N/A 00:07:18.320 Firmware Slot 1 Read-Only: N/A 00:07:18.320 Firmware Activation Without Reset: N/A 00:07:18.320 Multiple Update Detection Support: N/A 00:07:18.320 Firmware Update Granularity: No Information Provided 00:07:18.320 [2024-12-05 09:39:05.813541] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62806 terminated unexpected 00:07:18.320 Per-Namespace SMART Log: Yes 00:07:18.320 Asymmetric Namespace Access Log Page: Not Supported 00:07:18.320 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:18.320 Command Effects Log Page: Supported 00:07:18.320 Get Log Page Extended Data: Supported 00:07:18.320 Telemetry Log Pages: Not Supported 00:07:18.320 Persistent Event Log Pages: Not Supported 00:07:18.320 Supported Log Pages Log Page: May Support 00:07:18.320 Commands Supported & Effects Log Page: Not Supported 00:07:18.320 Feature Identifiers & Effects Log Page:May Support 00:07:18.320 NVMe-MI Commands & Effects Log Page: May Support 00:07:18.320 Data Area 4 for Telemetry Log: Not Supported 00:07:18.320 Error Log Page Entries Supported: 1 00:07:18.320 Keep Alive: Not Supported 00:07:18.320 00:07:18.320 NVM Command Set Attributes 00:07:18.320 ========================== 00:07:18.320 Submission Queue Entry Size 00:07:18.320 Max: 64 00:07:18.320 Min: 64 00:07:18.320 Completion Queue Entry Size 00:07:18.320 Max: 16 00:07:18.320 Min: 16 00:07:18.320 Number of Namespaces: 256 00:07:18.320 Compare Command: Supported 00:07:18.320 Write Uncorrectable Command: Not Supported 00:07:18.320 Dataset Management Command: Supported 00:07:18.320 Write Zeroes Command: Supported 00:07:18.320 Set Features Save Field: Supported 00:07:18.320 Reservations: Not Supported 00:07:18.320 Timestamp: Supported 00:07:18.320 Copy: Supported 00:07:18.320 Volatile Write Cache: Present 00:07:18.320 Atomic Write Unit (Normal): 1 00:07:18.320 Atomic Write Unit (PFail): 1 00:07:18.320 Atomic Compare & Write Unit: 1 00:07:18.320 Fused Compare & Write: Not Supported 00:07:18.320 Scatter-Gather List 00:07:18.320 SGL Command Set: Supported 00:07:18.320 SGL Keyed: Not Supported 00:07:18.320 SGL Bit Bucket Descriptor: Not Supported 00:07:18.320 SGL Metadata Pointer: Not Supported 00:07:18.320 Oversized SGL: Not Supported 00:07:18.320 SGL Metadata Address: Not Supported 00:07:18.320 SGL Offset: Not Supported 00:07:18.320 Transport SGL Data Block: Not Supported 00:07:18.320 Replay Protected Memory Block: Not Supported 00:07:18.320 00:07:18.320 Firmware Slot Information 00:07:18.320 ========================= 00:07:18.320 Active slot: 1 00:07:18.320 Slot 1 Firmware Revision: 1.0 00:07:18.320 00:07:18.320 00:07:18.320 Commands Supported and Effects 00:07:18.320 ============================== 00:07:18.320 Admin Commands 00:07:18.320 -------------- 00:07:18.320 Delete I/O Submission Queue (00h): Supported 00:07:18.320 Create I/O Submission Queue (01h): Supported 00:07:18.320 Get Log Page (02h): Supported 00:07:18.320 Delete I/O Completion Queue (04h): Supported 00:07:18.320 Create I/O Completion Queue (05h): Supported 00:07:18.320 Identify (06h): 
Supported 00:07:18.320 Abort (08h): Supported 00:07:18.320 Set Features (09h): Supported 00:07:18.320 Get Features (0Ah): Supported 00:07:18.320 Asynchronous Event Request (0Ch): Supported 00:07:18.320 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:18.320 Directive Send (19h): Supported 00:07:18.320 Directive Receive (1Ah): Supported 00:07:18.320 Virtualization Management (1Ch): Supported 00:07:18.320 Doorbell Buffer Config (7Ch): Supported 00:07:18.320 Format NVM (80h): Supported LBA-Change 00:07:18.320 I/O Commands 00:07:18.320 ------------ 00:07:18.320 Flush (00h): Supported LBA-Change 00:07:18.320 Write (01h): Supported LBA-Change 00:07:18.320 Read (02h): Supported 00:07:18.320 Compare (05h): Supported 00:07:18.320 Write Zeroes (08h): Supported LBA-Change 00:07:18.320 Dataset Management (09h): Supported LBA-Change 00:07:18.320 Unknown (0Ch): Supported 00:07:18.320 Unknown (12h): Supported 00:07:18.320 Copy (19h): Supported LBA-Change 00:07:18.320 Unknown (1Dh): Supported LBA-Change 00:07:18.320 00:07:18.320 Error Log 00:07:18.320 ========= 00:07:18.320 00:07:18.320 Arbitration 00:07:18.320 =========== 00:07:18.320 Arbitration Burst: no limit 00:07:18.320 00:07:18.320 Power Management 00:07:18.320 ================ 00:07:18.320 Number of Power States: 1 00:07:18.320 Current Power State: Power State #0 00:07:18.320 Power State #0: 00:07:18.320 Max Power: 25.00 W 00:07:18.320 Non-Operational State: Operational 00:07:18.320 Entry Latency: 16 microseconds 00:07:18.320 Exit Latency: 4 microseconds 00:07:18.320 Relative Read Throughput: 0 00:07:18.320 Relative Read Latency: 0 00:07:18.320 Relative Write Throughput: 0 00:07:18.320 Relative Write Latency: 0 00:07:18.320 Idle Power: Not Reported 00:07:18.320 [2024-12-05 09:39:05.814299] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62806 terminated unexpected 00:07:18.320 Active Power: Not Reported 00:07:18.320 Non-Operational Permissive Mode: Not Supported 00:07:18.320 00:07:18.320 Health Information 00:07:18.320 ================== 00:07:18.320 Critical Warnings: 00:07:18.320 Available Spare Space: OK 00:07:18.320 Temperature: OK 00:07:18.320 Device Reliability: OK 00:07:18.320 Read Only: No 00:07:18.320 Volatile Memory Backup: OK 00:07:18.320 Current Temperature: 323 Kelvin (50 Celsius) 00:07:18.320 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:18.320 Available Spare: 0% 00:07:18.320 Available Spare Threshold: 0% 00:07:18.320 Life Percentage Used: 0% 00:07:18.320 Data Units Read: 717 00:07:18.320 Data Units Written: 645 00:07:18.320 Host Read Commands: 40783 00:07:18.320 Host Write Commands: 40569 00:07:18.320 Controller Busy Time: 0 minutes 00:07:18.320 Power Cycles: 0 00:07:18.320 Power On Hours: 0 hours 00:07:18.320 Unsafe Shutdowns: 0 00:07:18.320 Unrecoverable Media Errors: 0 00:07:18.320 Lifetime Error Log Entries: 0 00:07:18.320 Warning Temperature Time: 0 minutes 00:07:18.320 Critical Temperature Time: 0 minutes 00:07:18.320 00:07:18.320 Number of Queues 00:07:18.320 ================ 00:07:18.320 Number of I/O Submission Queues: 64 00:07:18.320 Number of I/O Completion Queues: 64 00:07:18.320 00:07:18.320 ZNS Specific Controller Data 00:07:18.320 ============================ 00:07:18.320 Zone Append Size Limit: 0 00:07:18.320 00:07:18.320 00:07:18.320 Active Namespaces 00:07:18.320 ================= 00:07:18.320 Namespace ID:1 00:07:18.320 Error Recovery Timeout: Unlimited 00:07:18.320 Command Set Identifier: NVM (00h) 00:07:18.320 Deallocate: Supported 00:07:18.320 
Deallocated/Unwritten Error: Supported 00:07:18.320 Deallocated Read Value: All 0x00 00:07:18.320 Deallocate in Write Zeroes: Not Supported 00:07:18.320 Deallocated Guard Field: 0xFFFF 00:07:18.321 Flush: Supported 00:07:18.321 Reservation: Not Supported 00:07:18.321 Metadata Transferred as: Separate Metadata Buffer 00:07:18.321 Namespace Sharing Capabilities: Private 00:07:18.321 Size (in LBAs): 1548666 (5GiB) 00:07:18.321 Capacity (in LBAs): 1548666 (5GiB) 00:07:18.321 Utilization (in LBAs): 1548666 (5GiB) 00:07:18.321 Thin Provisioning: Not Supported 00:07:18.321 Per-NS Atomic Units: No 00:07:18.321 Maximum Single Source Range Length: 128 00:07:18.321 Maximum Copy Length: 128 00:07:18.321 Maximum Source Range Count: 128 00:07:18.321 NGUID/EUI64 Never Reused: No 00:07:18.321 Namespace Write Protected: No 00:07:18.321 Number of LBA Formats: 8 00:07:18.321 Current LBA Format: LBA Format #07 00:07:18.321 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:18.321 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:18.321 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:18.321 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:18.321 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:18.321 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:18.321 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:18.321 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:18.321 00:07:18.321 NVM Specific Namespace Data 00:07:18.321 =========================== 00:07:18.321 Logical Block Storage Tag Mask: 0 00:07:18.321 Protection Information Capabilities: 00:07:18.321 16b Guard Protection Information Storage Tag Support: No 00:07:18.321 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:18.321 Storage Tag Check Read Support: No 00:07:18.321 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.321 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.321 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.321 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.321 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.321 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.321 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.321 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.321 ===================================================== 00:07:18.321 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:18.321 ===================================================== 00:07:18.321 Controller Capabilities/Features 00:07:18.321 ================================ 00:07:18.321 Vendor ID: 1b36 00:07:18.321 Subsystem Vendor ID: 1af4 00:07:18.321 Serial Number: 12341 00:07:18.321 Model Number: QEMU NVMe Ctrl 00:07:18.321 Firmware Version: 8.0.0 00:07:18.321 Recommended Arb Burst: 6 00:07:18.321 IEEE OUI Identifier: 00 54 52 00:07:18.321 Multi-path I/O 00:07:18.321 May have multiple subsystem ports: No 00:07:18.321 May have multiple controllers: No 00:07:18.321 Associated with SR-IOV VF: No 00:07:18.321 Max Data Transfer Size: 524288 00:07:18.321 Max Number of Namespaces: 256 00:07:18.321 Max Number of I/O Queues: 64 00:07:18.321 NVMe Specification Version (VS): 1.4 00:07:18.321 NVMe 
Specification Version (Identify): 1.4 00:07:18.321 Maximum Queue Entries: 2048 00:07:18.321 Contiguous Queues Required: Yes 00:07:18.321 Arbitration Mechanisms Supported 00:07:18.321 Weighted Round Robin: Not Supported 00:07:18.321 Vendor Specific: Not Supported 00:07:18.321 Reset Timeout: 7500 ms 00:07:18.321 Doorbell Stride: 4 bytes 00:07:18.321 NVM Subsystem Reset: Not Supported 00:07:18.321 Command Sets Supported 00:07:18.321 NVM Command Set: Supported 00:07:18.321 Boot Partition: Not Supported 00:07:18.321 Memory Page Size Minimum: 4096 bytes 00:07:18.321 Memory Page Size Maximum: 65536 bytes 00:07:18.321 Persistent Memory Region: Not Supported 00:07:18.321 Optional Asynchronous Events Supported 00:07:18.321 Namespace Attribute Notices: Supported 00:07:18.321 Firmware Activation Notices: Not Supported 00:07:18.321 ANA Change Notices: Not Supported 00:07:18.321 PLE Aggregate Log Change Notices: Not Supported 00:07:18.321 LBA Status Info Alert Notices: Not Supported 00:07:18.321 EGE Aggregate Log Change Notices: Not Supported 00:07:18.321 Normal NVM Subsystem Shutdown event: Not Supported 00:07:18.321 Zone Descriptor Change Notices: Not Supported 00:07:18.321 Discovery Log Change Notices: Not Supported 00:07:18.321 Controller Attributes 00:07:18.321 128-bit Host Identifier: Not Supported 00:07:18.321 Non-Operational Permissive Mode: Not Supported 00:07:18.321 NVM Sets: Not Supported 00:07:18.321 Read Recovery Levels: Not Supported 00:07:18.321 Endurance Groups: Not Supported 00:07:18.321 Predictable Latency Mode: Not Supported 00:07:18.321 Traffic Based Keep ALive: Not Supported 00:07:18.321 Namespace Granularity: Not Supported 00:07:18.321 SQ Associations: Not Supported 00:07:18.321 UUID List: Not Supported 00:07:18.321 Multi-Domain Subsystem: Not Supported 00:07:18.321 Fixed Capacity Management: Not Supported 00:07:18.321 Variable Capacity Management: Not Supported 00:07:18.321 Delete Endurance Group: Not Supported 00:07:18.321 Delete NVM Set: Not Supported 00:07:18.321 Extended LBA Formats Supported: Supported 00:07:18.321 Flexible Data Placement Supported: Not Supported 00:07:18.321 00:07:18.321 Controller Memory Buffer Support 00:07:18.321 ================================ 00:07:18.321 Supported: No 00:07:18.321 00:07:18.321 Persistent Memory Region Support 00:07:18.321 ================================ 00:07:18.321 Supported: No 00:07:18.321 00:07:18.321 Admin Command Set Attributes 00:07:18.321 ============================ 00:07:18.321 Security Send/Receive: Not Supported 00:07:18.321 Format NVM: Supported 00:07:18.321 Firmware Activate/Download: Not Supported 00:07:18.321 Namespace Management: Supported 00:07:18.321 Device Self-Test: Not Supported 00:07:18.321 Directives: Supported 00:07:18.321 NVMe-MI: Not Supported 00:07:18.321 Virtualization Management: Not Supported 00:07:18.321 Doorbell Buffer Config: Supported 00:07:18.321 Get LBA Status Capability: Not Supported 00:07:18.321 Command & Feature Lockdown Capability: Not Supported 00:07:18.321 Abort Command Limit: 4 00:07:18.321 Async Event Request Limit: 4 00:07:18.321 Number of Firmware Slots: N/A 00:07:18.321 Firmware Slot 1 Read-Only: N/A 00:07:18.321 Firmware Activation Without Reset: N/A 00:07:18.321 Multiple Update Detection Support: N/A 00:07:18.321 Firmware Update Granularity: No Information Provided 00:07:18.321 Per-Namespace SMART Log: Yes 00:07:18.321 Asymmetric Namespace Access Log Page: Not Supported 00:07:18.321 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:18.321 Command Effects Log Page: Supported 
00:07:18.321 Get Log Page Extended Data: Supported 00:07:18.321 Telemetry Log Pages: Not Supported 00:07:18.321 Persistent Event Log Pages: Not Supported 00:07:18.321 Supported Log Pages Log Page: May Support 00:07:18.321 Commands Supported & Effects Log Page: Not Supported 00:07:18.321 Feature Identifiers & Effects Log Page:May Support 00:07:18.321 NVMe-MI Commands & Effects Log Page: May Support 00:07:18.321 Data Area 4 for Telemetry Log: Not Supported 00:07:18.321 Error Log Page Entries Supported: 1 00:07:18.321 Keep Alive: Not Supported 00:07:18.321 00:07:18.321 NVM Command Set Attributes 00:07:18.321 ========================== 00:07:18.321 Submission Queue Entry Size 00:07:18.321 Max: 64 00:07:18.321 Min: 64 00:07:18.321 Completion Queue Entry Size 00:07:18.321 Max: 16 00:07:18.321 Min: 16 00:07:18.321 Number of Namespaces: 256 00:07:18.321 Compare Command: Supported 00:07:18.321 Write Uncorrectable Command: Not Supported 00:07:18.321 Dataset Management Command: Supported 00:07:18.321 Write Zeroes Command: Supported 00:07:18.321 Set Features Save Field: Supported 00:07:18.321 Reservations: Not Supported 00:07:18.321 Timestamp: Supported 00:07:18.321 Copy: Supported 00:07:18.321 Volatile Write Cache: Present 00:07:18.321 Atomic Write Unit (Normal): 1 00:07:18.321 Atomic Write Unit (PFail): 1 00:07:18.321 Atomic Compare & Write Unit: 1 00:07:18.321 Fused Compare & Write: Not Supported 00:07:18.321 Scatter-Gather List 00:07:18.321 SGL Command Set: Supported 00:07:18.321 SGL Keyed: Not Supported 00:07:18.321 SGL Bit Bucket Descriptor: Not Supported 00:07:18.321 SGL Metadata Pointer: Not Supported 00:07:18.321 Oversized SGL: Not Supported 00:07:18.321 SGL Metadata Address: Not Supported 00:07:18.321 SGL Offset: Not Supported 00:07:18.321 Transport SGL Data Block: Not Supported 00:07:18.321 Replay Protected Memory Block: Not Supported 00:07:18.321 00:07:18.321 Firmware Slot Information 00:07:18.321 ========================= 00:07:18.321 Active slot: 1 00:07:18.321 Slot 1 Firmware Revision: 1.0 00:07:18.321 00:07:18.321 00:07:18.321 Commands Supported and Effects 00:07:18.321 ============================== 00:07:18.321 Admin Commands 00:07:18.321 -------------- 00:07:18.321 Delete I/O Submission Queue (00h): Supported 00:07:18.322 Create I/O Submission Queue (01h): Supported 00:07:18.322 Get Log Page (02h): Supported 00:07:18.322 Delete I/O Completion Queue (04h): Supported 00:07:18.322 Create I/O Completion Queue (05h): Supported 00:07:18.322 Identify (06h): Supported 00:07:18.322 Abort (08h): Supported 00:07:18.322 Set Features (09h): Supported 00:07:18.322 Get Features (0Ah): Supported 00:07:18.322 Asynchronous Event Request (0Ch): Supported 00:07:18.322 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:18.322 Directive Send (19h): Supported 00:07:18.322 Directive Receive (1Ah): Supported 00:07:18.322 Virtualization Management (1Ch): Supported 00:07:18.322 Doorbell Buffer Config (7Ch): Supported 00:07:18.322 Format NVM (80h): Supported LBA-Change 00:07:18.322 I/O Commands 00:07:18.322 ------------ 00:07:18.322 Flush (00h): Supported LBA-Change 00:07:18.322 Write (01h): Supported LBA-Change 00:07:18.322 Read (02h): Supported 00:07:18.322 Compare (05h): Supported 00:07:18.322 Write Zeroes (08h): Supported LBA-Change 00:07:18.322 Dataset Management (09h): Supported LBA-Change 00:07:18.322 Unknown (0Ch): Supported 00:07:18.322 Unknown (12h): Supported 00:07:18.322 Copy (19h): Supported LBA-Change 00:07:18.322 Unknown (1Dh): Supported LBA-Change 00:07:18.322 00:07:18.322 Error 
Log 00:07:18.322 ========= 00:07:18.322 00:07:18.322 Arbitration 00:07:18.322 =========== 00:07:18.322 Arbitration Burst: no limit 00:07:18.322 00:07:18.322 Power Management 00:07:18.322 ================ 00:07:18.322 Number of Power States: 1 00:07:18.322 Current Power State: Power State #0 00:07:18.322 Power State #0: 00:07:18.322 Max Power: 25.00 W 00:07:18.322 Non-Operational State: Operational 00:07:18.322 Entry Latency: 16 microseconds 00:07:18.322 Exit Latency: 4 microseconds 00:07:18.322 Relative Read Throughput: 0 00:07:18.322 Relative Read Latency: 0 00:07:18.322 Relative Write Throughput: 0 00:07:18.322 Relative Write Latency: 0 00:07:18.322 Idle Power: Not Reported 00:07:18.322 Active Power: Not Reported 00:07:18.322 Non-Operational Permissive Mode: Not Supported 00:07:18.322 00:07:18.322 Health Information 00:07:18.322 ================== 00:07:18.322 Critical Warnings: 00:07:18.322 Available Spare Space: OK 00:07:18.322 Temperature: OK 00:07:18.322 [2024-12-05 09:39:05.815275] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62806 terminated unexpected 00:07:18.322 Device Reliability: OK 00:07:18.322 Read Only: No 00:07:18.322 Volatile Memory Backup: OK 00:07:18.322 Current Temperature: 323 Kelvin (50 Celsius) 00:07:18.322 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:18.322 Available Spare: 0% 00:07:18.322 Available Spare Threshold: 0% 00:07:18.322 Life Percentage Used: 0% 00:07:18.322 Data Units Read: 1099 00:07:18.322 Data Units Written: 960 00:07:18.322 Host Read Commands: 59625 00:07:18.322 Host Write Commands: 58329 00:07:18.322 Controller Busy Time: 0 minutes 00:07:18.322 Power Cycles: 0 00:07:18.322 Power On Hours: 0 hours 00:07:18.322 Unsafe Shutdowns: 0 00:07:18.322 Unrecoverable Media Errors: 0 00:07:18.322 Lifetime Error Log Entries: 0 00:07:18.322 Warning Temperature Time: 0 minutes 00:07:18.322 Critical Temperature Time: 0 minutes 00:07:18.322 00:07:18.322 Number of Queues 00:07:18.322 ================ 00:07:18.322 Number of I/O Submission Queues: 64 00:07:18.322 Number of I/O Completion Queues: 64 00:07:18.322 00:07:18.322 ZNS Specific Controller Data 00:07:18.322 ============================ 00:07:18.322 Zone Append Size Limit: 0 00:07:18.322 00:07:18.322 00:07:18.322 Active Namespaces 00:07:18.322 ================= 00:07:18.322 Namespace ID:1 00:07:18.322 Error Recovery Timeout: Unlimited 00:07:18.322 Command Set Identifier: NVM (00h) 00:07:18.322 Deallocate: Supported 00:07:18.322 Deallocated/Unwritten Error: Supported 00:07:18.322 Deallocated Read Value: All 0x00 00:07:18.322 Deallocate in Write Zeroes: Not Supported 00:07:18.322 Deallocated Guard Field: 0xFFFF 00:07:18.322 Flush: Supported 00:07:18.322 Reservation: Not Supported 00:07:18.322 Namespace Sharing Capabilities: Private 00:07:18.322 Size (in LBAs): 1310720 (5GiB) 00:07:18.322 Capacity (in LBAs): 1310720 (5GiB) 00:07:18.322 Utilization (in LBAs): 1310720 (5GiB) 00:07:18.322 Thin Provisioning: Not Supported 00:07:18.322 Per-NS Atomic Units: No 00:07:18.322 Maximum Single Source Range Length: 128 00:07:18.322 Maximum Copy Length: 128 00:07:18.322 Maximum Source Range Count: 128 00:07:18.322 NGUID/EUI64 Never Reused: No 00:07:18.322 Namespace Write Protected: No 00:07:18.322 Number of LBA Formats: 8 00:07:18.322 Current LBA Format: LBA Format #04 00:07:18.322 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:18.322 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:18.322 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:18.322 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:07:18.322 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:18.322 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:18.322 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:18.322 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:18.322 00:07:18.322 NVM Specific Namespace Data 00:07:18.322 =========================== 00:07:18.322 Logical Block Storage Tag Mask: 0 00:07:18.322 Protection Information Capabilities: 00:07:18.322 16b Guard Protection Information Storage Tag Support: No 00:07:18.322 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:18.322 Storage Tag Check Read Support: No 00:07:18.322 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.322 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.322 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.322 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.322 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.322 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.322 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.322 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.322 ===================================================== 00:07:18.322 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:18.322 ===================================================== 00:07:18.322 Controller Capabilities/Features 00:07:18.322 ================================ 00:07:18.322 Vendor ID: 1b36 00:07:18.322 Subsystem Vendor ID: 1af4 00:07:18.322 Serial Number: 12343 00:07:18.322 Model Number: QEMU NVMe Ctrl 00:07:18.322 Firmware Version: 8.0.0 00:07:18.322 Recommended Arb Burst: 6 00:07:18.322 IEEE OUI Identifier: 00 54 52 00:07:18.322 Multi-path I/O 00:07:18.322 May have multiple subsystem ports: No 00:07:18.322 May have multiple controllers: Yes 00:07:18.322 Associated with SR-IOV VF: No 00:07:18.322 Max Data Transfer Size: 524288 00:07:18.322 Max Number of Namespaces: 256 00:07:18.322 Max Number of I/O Queues: 64 00:07:18.322 NVMe Specification Version (VS): 1.4 00:07:18.322 NVMe Specification Version (Identify): 1.4 00:07:18.322 Maximum Queue Entries: 2048 00:07:18.322 Contiguous Queues Required: Yes 00:07:18.322 Arbitration Mechanisms Supported 00:07:18.322 Weighted Round Robin: Not Supported 00:07:18.322 Vendor Specific: Not Supported 00:07:18.322 Reset Timeout: 7500 ms 00:07:18.322 Doorbell Stride: 4 bytes 00:07:18.322 NVM Subsystem Reset: Not Supported 00:07:18.322 Command Sets Supported 00:07:18.322 NVM Command Set: Supported 00:07:18.322 Boot Partition: Not Supported 00:07:18.322 Memory Page Size Minimum: 4096 bytes 00:07:18.322 Memory Page Size Maximum: 65536 bytes 00:07:18.322 Persistent Memory Region: Not Supported 00:07:18.322 Optional Asynchronous Events Supported 00:07:18.322 Namespace Attribute Notices: Supported 00:07:18.322 Firmware Activation Notices: Not Supported 00:07:18.322 ANA Change Notices: Not Supported 00:07:18.322 PLE Aggregate Log Change Notices: Not Supported 00:07:18.322 LBA Status Info Alert Notices: Not Supported 00:07:18.322 EGE Aggregate Log Change Notices: Not Supported 00:07:18.322 Normal NVM Subsystem Shutdown event: Not Supported 00:07:18.322 Zone 
Descriptor Change Notices: Not Supported 00:07:18.322 Discovery Log Change Notices: Not Supported 00:07:18.322 Controller Attributes 00:07:18.322 128-bit Host Identifier: Not Supported 00:07:18.322 Non-Operational Permissive Mode: Not Supported 00:07:18.322 NVM Sets: Not Supported 00:07:18.322 Read Recovery Levels: Not Supported 00:07:18.322 Endurance Groups: Supported 00:07:18.322 Predictable Latency Mode: Not Supported 00:07:18.322 Traffic Based Keep ALive: Not Supported 00:07:18.322 Namespace Granularity: Not Supported 00:07:18.322 SQ Associations: Not Supported 00:07:18.322 UUID List: Not Supported 00:07:18.322 Multi-Domain Subsystem: Not Supported 00:07:18.322 Fixed Capacity Management: Not Supported 00:07:18.322 Variable Capacity Management: Not Supported 00:07:18.323 Delete Endurance Group: Not Supported 00:07:18.323 Delete NVM Set: Not Supported 00:07:18.323 Extended LBA Formats Supported: Supported 00:07:18.323 Flexible Data Placement Supported: Supported 00:07:18.323 00:07:18.323 Controller Memory Buffer Support 00:07:18.323 ================================ 00:07:18.323 Supported: No 00:07:18.323 00:07:18.323 Persistent Memory Region Support 00:07:18.323 ================================ 00:07:18.323 Supported: No 00:07:18.323 00:07:18.323 Admin Command Set Attributes 00:07:18.323 ============================ 00:07:18.323 Security Send/Receive: Not Supported 00:07:18.323 Format NVM: Supported 00:07:18.323 Firmware Activate/Download: Not Supported 00:07:18.323 Namespace Management: Supported 00:07:18.323 Device Self-Test: Not Supported 00:07:18.323 Directives: Supported 00:07:18.323 NVMe-MI: Not Supported 00:07:18.323 Virtualization Management: Not Supported 00:07:18.323 Doorbell Buffer Config: Supported 00:07:18.323 Get LBA Status Capability: Not Supported 00:07:18.323 Command & Feature Lockdown Capability: Not Supported 00:07:18.323 Abort Command Limit: 4 00:07:18.323 Async Event Request Limit: 4 00:07:18.323 Number of Firmware Slots: N/A 00:07:18.323 Firmware Slot 1 Read-Only: N/A 00:07:18.323 Firmware Activation Without Reset: N/A 00:07:18.323 Multiple Update Detection Support: N/A 00:07:18.323 Firmware Update Granularity: No Information Provided 00:07:18.323 Per-Namespace SMART Log: Yes 00:07:18.323 Asymmetric Namespace Access Log Page: Not Supported 00:07:18.323 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:18.323 Command Effects Log Page: Supported 00:07:18.323 Get Log Page Extended Data: Supported 00:07:18.323 Telemetry Log Pages: Not Supported 00:07:18.323 Persistent Event Log Pages: Not Supported 00:07:18.323 Supported Log Pages Log Page: May Support 00:07:18.323 Commands Supported & Effects Log Page: Not Supported 00:07:18.323 Feature Identifiers & Effects Log Page:May Support 00:07:18.323 NVMe-MI Commands & Effects Log Page: May Support 00:07:18.323 Data Area 4 for Telemetry Log: Not Supported 00:07:18.323 Error Log Page Entries Supported: 1 00:07:18.323 Keep Alive: Not Supported 00:07:18.323 00:07:18.323 NVM Command Set Attributes 00:07:18.323 ========================== 00:07:18.323 Submission Queue Entry Size 00:07:18.323 Max: 64 00:07:18.323 Min: 64 00:07:18.323 Completion Queue Entry Size 00:07:18.323 Max: 16 00:07:18.323 Min: 16 00:07:18.323 Number of Namespaces: 256 00:07:18.323 Compare Command: Supported 00:07:18.323 Write Uncorrectable Command: Not Supported 00:07:18.323 Dataset Management Command: Supported 00:07:18.323 Write Zeroes Command: Supported 00:07:18.323 Set Features Save Field: Supported 00:07:18.323 Reservations: Not Supported 00:07:18.323 
Timestamp: Supported 00:07:18.323 Copy: Supported 00:07:18.323 Volatile Write Cache: Present 00:07:18.323 Atomic Write Unit (Normal): 1 00:07:18.323 Atomic Write Unit (PFail): 1 00:07:18.323 Atomic Compare & Write Unit: 1 00:07:18.323 Fused Compare & Write: Not Supported 00:07:18.323 Scatter-Gather List 00:07:18.323 SGL Command Set: Supported 00:07:18.323 SGL Keyed: Not Supported 00:07:18.323 SGL Bit Bucket Descriptor: Not Supported 00:07:18.323 SGL Metadata Pointer: Not Supported 00:07:18.323 Oversized SGL: Not Supported 00:07:18.323 SGL Metadata Address: Not Supported 00:07:18.323 SGL Offset: Not Supported 00:07:18.323 Transport SGL Data Block: Not Supported 00:07:18.323 Replay Protected Memory Block: Not Supported 00:07:18.323 00:07:18.323 Firmware Slot Information 00:07:18.323 ========================= 00:07:18.323 Active slot: 1 00:07:18.323 Slot 1 Firmware Revision: 1.0 00:07:18.323 00:07:18.323 00:07:18.323 Commands Supported and Effects 00:07:18.323 ============================== 00:07:18.323 Admin Commands 00:07:18.323 -------------- 00:07:18.323 Delete I/O Submission Queue (00h): Supported 00:07:18.323 Create I/O Submission Queue (01h): Supported 00:07:18.323 Get Log Page (02h): Supported 00:07:18.323 Delete I/O Completion Queue (04h): Supported 00:07:18.323 Create I/O Completion Queue (05h): Supported 00:07:18.323 Identify (06h): Supported 00:07:18.323 Abort (08h): Supported 00:07:18.323 Set Features (09h): Supported 00:07:18.323 Get Features (0Ah): Supported 00:07:18.323 Asynchronous Event Request (0Ch): Supported 00:07:18.323 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:18.323 Directive Send (19h): Supported 00:07:18.323 Directive Receive (1Ah): Supported 00:07:18.323 Virtualization Management (1Ch): Supported 00:07:18.323 Doorbell Buffer Config (7Ch): Supported 00:07:18.323 Format NVM (80h): Supported LBA-Change 00:07:18.323 I/O Commands 00:07:18.323 ------------ 00:07:18.323 Flush (00h): Supported LBA-Change 00:07:18.323 Write (01h): Supported LBA-Change 00:07:18.323 Read (02h): Supported 00:07:18.323 Compare (05h): Supported 00:07:18.323 Write Zeroes (08h): Supported LBA-Change 00:07:18.323 Dataset Management (09h): Supported LBA-Change 00:07:18.323 Unknown (0Ch): Supported 00:07:18.323 Unknown (12h): Supported 00:07:18.323 Copy (19h): Supported LBA-Change 00:07:18.323 Unknown (1Dh): Supported LBA-Change 00:07:18.323 00:07:18.323 Error Log 00:07:18.323 ========= 00:07:18.323 00:07:18.323 Arbitration 00:07:18.323 =========== 00:07:18.323 Arbitration Burst: no limit 00:07:18.323 00:07:18.323 Power Management 00:07:18.323 ================ 00:07:18.323 Number of Power States: 1 00:07:18.323 Current Power State: Power State #0 00:07:18.323 Power State #0: 00:07:18.323 Max Power: 25.00 W 00:07:18.323 Non-Operational State: Operational 00:07:18.323 Entry Latency: 16 microseconds 00:07:18.323 Exit Latency: 4 microseconds 00:07:18.323 Relative Read Throughput: 0 00:07:18.323 Relative Read Latency: 0 00:07:18.323 Relative Write Throughput: 0 00:07:18.323 Relative Write Latency: 0 00:07:18.323 Idle Power: Not Reported 00:07:18.323 Active Power: Not Reported 00:07:18.323 Non-Operational Permissive Mode: Not Supported 00:07:18.323 00:07:18.323 Health Information 00:07:18.323 ================== 00:07:18.323 Critical Warnings: 00:07:18.323 Available Spare Space: OK 00:07:18.323 Temperature: OK 00:07:18.323 Device Reliability: OK 00:07:18.323 Read Only: No 00:07:18.323 Volatile Memory Backup: OK 00:07:18.323 Current Temperature: 323 Kelvin (50 Celsius) 00:07:18.323 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:18.323 Available Spare: 0% 00:07:18.323 Available Spare Threshold: 0% 00:07:18.323 Life Percentage Used: 0% 00:07:18.323 Data Units Read: 884 00:07:18.323 Data Units Written: 813 00:07:18.323 Host Read Commands: 42621 00:07:18.323 Host Write Commands: 42045 00:07:18.323 Controller Busy Time: 0 minutes 00:07:18.323 Power Cycles: 0 00:07:18.323 Power On Hours: 0 hours 00:07:18.323 Unsafe Shutdowns: 0 00:07:18.323 Unrecoverable Media Errors: 0 00:07:18.323 Lifetime Error Log Entries: 0 00:07:18.323 Warning Temperature Time: 0 minutes 00:07:18.323 Critical Temperature Time: 0 minutes 00:07:18.323 00:07:18.323 Number of Queues 00:07:18.323 ================ 00:07:18.323 Number of I/O Submission Queues: 64 00:07:18.323 Number of I/O Completion Queues: 64 00:07:18.323 00:07:18.323 ZNS Specific Controller Data 00:07:18.323 ============================ 00:07:18.323 Zone Append Size Limit: 0 00:07:18.323 00:07:18.323 00:07:18.323 Active Namespaces 00:07:18.323 ================= 00:07:18.323 Namespace ID:1 00:07:18.323 Error Recovery Timeout: Unlimited 00:07:18.323 Command Set Identifier: NVM (00h) 00:07:18.323 Deallocate: Supported 00:07:18.323 Deallocated/Unwritten Error: Supported 00:07:18.323 Deallocated Read Value: All 0x00 00:07:18.323 Deallocate in Write Zeroes: Not Supported 00:07:18.323 Deallocated Guard Field: 0xFFFF 00:07:18.323 Flush: Supported 00:07:18.323 Reservation: Not Supported 00:07:18.323 Namespace Sharing Capabilities: Multiple Controllers 00:07:18.323 Size (in LBAs): 262144 (1GiB) 00:07:18.323 Capacity (in LBAs): 262144 (1GiB) 00:07:18.323 Utilization (in LBAs): 262144 (1GiB) 00:07:18.323 Thin Provisioning: Not Supported 00:07:18.323 Per-NS Atomic Units: No 00:07:18.323 Maximum Single Source Range Length: 128 00:07:18.323 Maximum Copy Length: 128 00:07:18.323 Maximum Source Range Count: 128 00:07:18.323 NGUID/EUI64 Never Reused: No 00:07:18.323 Namespace Write Protected: No 00:07:18.323 Endurance group ID: 1 00:07:18.323 Number of LBA Formats: 8 00:07:18.323 Current LBA Format: LBA Format #04 00:07:18.323 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:18.323 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:18.323 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:18.323 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:18.324 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:18.324 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:18.324 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:18.324 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:18.324 00:07:18.324 Get Feature FDP: 00:07:18.324 ================ 00:07:18.324 Enabled: Yes 00:07:18.324 FDP configuration index: 0 00:07:18.324 00:07:18.324 FDP configurations log page 00:07:18.324 =========================== 00:07:18.324 Number of FDP configurations: 1 00:07:18.324 Version: 0 00:07:18.324 Size: 112 00:07:18.324 FDP Configuration Descriptor: 0 00:07:18.324 Descriptor Size: 96 00:07:18.324 Reclaim Group Identifier format: 2 00:07:18.324 FDP Volatile Write Cache: Not Present 00:07:18.324 FDP Configuration: Valid 00:07:18.324 Vendor Specific Size: 0 00:07:18.324 Number of Reclaim Groups: 2 00:07:18.324 Number of Reclaim Unit Handles: 8 00:07:18.324 Max Placement Identifiers: 128 00:07:18.324 Number of Namespaces Supported: 256 00:07:18.324 Reclaim Unit Nominal Size: 6000000 bytes 00:07:18.324 Estimated Reclaim Unit Time Limit: Not Reported 00:07:18.324 RUH Desc #000: RUH Type: Initially Isolated 00:07:18.324 RUH Desc #001: RUH 
Type: Initially Isolated 00:07:18.324 RUH Desc #002: RUH Type: Initially Isolated 00:07:18.324 RUH Desc #003: RUH Type: Initially Isolated 00:07:18.324 RUH Desc #004: RUH Type: Initially Isolated 00:07:18.324 RUH Desc #005: RUH Type: Initially Isolated 00:07:18.324 RUH Desc #006: RUH Type: Initially Isolated 00:07:18.324 RUH Desc #007: RUH Type: Initially Isolated 00:07:18.324 00:07:18.324 FDP reclaim unit handle usage log page 00:07:18.324 ====================================== 00:07:18.324 Number of Reclaim Unit Handles: 8 00:07:18.324 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:18.324 RUH Usage Desc #001: RUH Attributes: Unused 00:07:18.324 RUH Usage Desc #002: RUH Attributes: Unused 00:07:18.324 RUH Usage Desc #003: RUH Attributes: Unused 00:07:18.324 RUH Usage Desc #004: RUH Attributes: Unused 00:07:18.324 RUH Usage Desc #005: RUH Attributes: Unused 00:07:18.324 RUH Usage Desc #006: RUH Attributes: Unused 00:07:18.324 RUH Usage Desc #007: RUH Attributes: Unused 00:07:18.324 00:07:18.324 FDP statistics log page 00:07:18.324 ======================= 00:07:18.324 Host bytes with metadata written: 508928000 00:07:18.324 Media bytes with metadata written: 508985344 00:07:18.324 [2024-12-05 09:39:05.816422] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62806 terminated unexpected 00:07:18.324 Media bytes erased: 0 00:07:18.324 00:07:18.324 FDP events log page 00:07:18.324 =================== 00:07:18.324 Number of FDP events: 0 00:07:18.324 00:07:18.324 NVM Specific Namespace Data 00:07:18.324 =========================== 00:07:18.324 Logical Block Storage Tag Mask: 0 00:07:18.324 Protection Information Capabilities: 00:07:18.324 16b Guard Protection Information Storage Tag Support: No 00:07:18.324 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:18.324 Storage Tag Check Read Support: No 00:07:18.324 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.324 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.324 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.324 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.324 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.324 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.324 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.324 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.324 ===================================================== 00:07:18.324 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:18.324 ===================================================== 00:07:18.324 Controller Capabilities/Features 00:07:18.324 ================================ 00:07:18.324 Vendor ID: 1b36 00:07:18.324 Subsystem Vendor ID: 1af4 00:07:18.324 Serial Number: 12342 00:07:18.324 Model Number: QEMU NVMe Ctrl 00:07:18.324 Firmware Version: 8.0.0 00:07:18.324 Recommended Arb Burst: 6 00:07:18.324 IEEE OUI Identifier: 00 54 52 00:07:18.324 Multi-path I/O 00:07:18.324 May have multiple subsystem ports: No 00:07:18.324 May have multiple controllers: No 00:07:18.324 Associated with SR-IOV VF: No 00:07:18.324 Max Data Transfer Size: 524288 00:07:18.324 Max Number of Namespaces: 256 
00:07:18.324 Max Number of I/O Queues: 64 00:07:18.324 NVMe Specification Version (VS): 1.4 00:07:18.324 NVMe Specification Version (Identify): 1.4 00:07:18.324 Maximum Queue Entries: 2048 00:07:18.324 Contiguous Queues Required: Yes 00:07:18.324 Arbitration Mechanisms Supported 00:07:18.324 Weighted Round Robin: Not Supported 00:07:18.324 Vendor Specific: Not Supported 00:07:18.324 Reset Timeout: 7500 ms 00:07:18.324 Doorbell Stride: 4 bytes 00:07:18.324 NVM Subsystem Reset: Not Supported 00:07:18.324 Command Sets Supported 00:07:18.324 NVM Command Set: Supported 00:07:18.324 Boot Partition: Not Supported 00:07:18.324 Memory Page Size Minimum: 4096 bytes 00:07:18.324 Memory Page Size Maximum: 65536 bytes 00:07:18.324 Persistent Memory Region: Not Supported 00:07:18.324 Optional Asynchronous Events Supported 00:07:18.324 Namespace Attribute Notices: Supported 00:07:18.324 Firmware Activation Notices: Not Supported 00:07:18.324 ANA Change Notices: Not Supported 00:07:18.324 PLE Aggregate Log Change Notices: Not Supported 00:07:18.324 LBA Status Info Alert Notices: Not Supported 00:07:18.324 EGE Aggregate Log Change Notices: Not Supported 00:07:18.324 Normal NVM Subsystem Shutdown event: Not Supported 00:07:18.324 Zone Descriptor Change Notices: Not Supported 00:07:18.324 Discovery Log Change Notices: Not Supported 00:07:18.324 Controller Attributes 00:07:18.324 128-bit Host Identifier: Not Supported 00:07:18.324 Non-Operational Permissive Mode: Not Supported 00:07:18.324 NVM Sets: Not Supported 00:07:18.324 Read Recovery Levels: Not Supported 00:07:18.324 Endurance Groups: Not Supported 00:07:18.324 Predictable Latency Mode: Not Supported 00:07:18.324 Traffic Based Keep Alive: Not Supported 00:07:18.324 Namespace Granularity: Not Supported 00:07:18.324 SQ Associations: Not Supported 00:07:18.324 UUID List: Not Supported 00:07:18.324 Multi-Domain Subsystem: Not Supported 00:07:18.324 Fixed Capacity Management: Not Supported 00:07:18.324 Variable Capacity Management: Not Supported 00:07:18.324 Delete Endurance Group: Not Supported 00:07:18.324 Delete NVM Set: Not Supported 00:07:18.324 Extended LBA Formats Supported: Supported 00:07:18.324 Flexible Data Placement Supported: Not Supported 00:07:18.324 00:07:18.324 Controller Memory Buffer Support 00:07:18.324 ================================ 00:07:18.324 Supported: No 00:07:18.324 00:07:18.324 Persistent Memory Region Support 00:07:18.324 ================================ 00:07:18.324 Supported: No 00:07:18.324 00:07:18.324 Admin Command Set Attributes 00:07:18.324 ============================ 00:07:18.324 Security Send/Receive: Not Supported 00:07:18.324 Format NVM: Supported 00:07:18.324 Firmware Activate/Download: Not Supported 00:07:18.324 Namespace Management: Supported 00:07:18.324 Device Self-Test: Not Supported 00:07:18.324 Directives: Supported 00:07:18.324 NVMe-MI: Not Supported 00:07:18.324 Virtualization Management: Not Supported 00:07:18.324 Doorbell Buffer Config: Supported 00:07:18.324 Get LBA Status Capability: Not Supported 00:07:18.324 Command & Feature Lockdown Capability: Not Supported 00:07:18.324 Abort Command Limit: 4 00:07:18.325 Async Event Request Limit: 4 00:07:18.325 Number of Firmware Slots: N/A 00:07:18.325 Firmware Slot 1 Read-Only: N/A 00:07:18.325 Firmware Activation Without Reset: N/A 00:07:18.325 Multiple Update Detection Support: N/A 00:07:18.325 Firmware Update Granularity: No Information Provided 00:07:18.325 Per-Namespace SMART Log: Yes 00:07:18.325 Asymmetric Namespace Access Log Page: Not Supported
00:07:18.325 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:18.325 Command Effects Log Page: Supported 00:07:18.325 Get Log Page Extended Data: Supported 00:07:18.325 Telemetry Log Pages: Not Supported 00:07:18.325 Persistent Event Log Pages: Not Supported 00:07:18.325 Supported Log Pages Log Page: May Support 00:07:18.325 Commands Supported & Effects Log Page: Not Supported 00:07:18.325 Feature Identifiers & Effects Log Page: May Support 00:07:18.325 NVMe-MI Commands & Effects Log Page: May Support 00:07:18.325 Data Area 4 for Telemetry Log: Not Supported 00:07:18.325 Error Log Page Entries Supported: 1 00:07:18.325 Keep Alive: Not Supported 00:07:18.325 00:07:18.325 NVM Command Set Attributes 00:07:18.325 ========================== 00:07:18.325 Submission Queue Entry Size 00:07:18.325 Max: 64 00:07:18.325 Min: 64 00:07:18.325 Completion Queue Entry Size 00:07:18.325 Max: 16 00:07:18.325 Min: 16 00:07:18.325 Number of Namespaces: 256 00:07:18.325 Compare Command: Supported 00:07:18.325 Write Uncorrectable Command: Not Supported 00:07:18.325 Dataset Management Command: Supported 00:07:18.325 Write Zeroes Command: Supported 00:07:18.325 Set Features Save Field: Supported 00:07:18.325 Reservations: Not Supported 00:07:18.325 Timestamp: Supported 00:07:18.325 Copy: Supported 00:07:18.325 Volatile Write Cache: Present 00:07:18.325 Atomic Write Unit (Normal): 1 00:07:18.325 Atomic Write Unit (PFail): 1 00:07:18.325 Atomic Compare & Write Unit: 1 00:07:18.325 Fused Compare & Write: Not Supported 00:07:18.325 Scatter-Gather List 00:07:18.325 SGL Command Set: Supported 00:07:18.325 SGL Keyed: Not Supported 00:07:18.325 SGL Bit Bucket Descriptor: Not Supported 00:07:18.325 SGL Metadata Pointer: Not Supported 00:07:18.325 Oversized SGL: Not Supported 00:07:18.325 SGL Metadata Address: Not Supported 00:07:18.325 SGL Offset: Not Supported 00:07:18.325 Transport SGL Data Block: Not Supported 00:07:18.325 Replay Protected Memory Block: Not Supported 00:07:18.325 00:07:18.325 Firmware Slot Information 00:07:18.325 ========================= 00:07:18.325 Active slot: 1 00:07:18.325 Slot 1 Firmware Revision: 1.0 00:07:18.325 00:07:18.325 00:07:18.325 Commands Supported and Effects 00:07:18.325 ============================== 00:07:18.325 Admin Commands 00:07:18.325 -------------- 00:07:18.325 Delete I/O Submission Queue (00h): Supported 00:07:18.325 Create I/O Submission Queue (01h): Supported 00:07:18.325 Get Log Page (02h): Supported 00:07:18.325 Delete I/O Completion Queue (04h): Supported 00:07:18.325 Create I/O Completion Queue (05h): Supported 00:07:18.325 Identify (06h): Supported 00:07:18.325 Abort (08h): Supported 00:07:18.325 Set Features (09h): Supported 00:07:18.325 Get Features (0Ah): Supported 00:07:18.325 Asynchronous Event Request (0Ch): Supported 00:07:18.325 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:18.325 Directive Send (19h): Supported 00:07:18.325 Directive Receive (1Ah): Supported 00:07:18.325 Virtualization Management (1Ch): Supported 00:07:18.325 Doorbell Buffer Config (7Ch): Supported 00:07:18.325 Format NVM (80h): Supported LBA-Change 00:07:18.325 I/O Commands 00:07:18.325 ------------ 00:07:18.325 Flush (00h): Supported LBA-Change 00:07:18.325 Write (01h): Supported LBA-Change 00:07:18.325 Read (02h): Supported 00:07:18.325 Compare (05h): Supported 00:07:18.325 Write Zeroes (08h): Supported LBA-Change 00:07:18.325 Dataset Management (09h): Supported LBA-Change 00:07:18.325 Unknown (0Ch): Supported 00:07:18.325 Unknown (12h): Supported 00:07:18.325 Copy (19h):
Supported LBA-Change 00:07:18.325 Unknown (1Dh): Supported LBA-Change 00:07:18.325 00:07:18.325 Error Log 00:07:18.325 ========= 00:07:18.325 00:07:18.325 Arbitration 00:07:18.325 =========== 00:07:18.325 Arbitration Burst: no limit 00:07:18.325 00:07:18.325 Power Management 00:07:18.325 ================ 00:07:18.325 Number of Power States: 1 00:07:18.325 Current Power State: Power State #0 00:07:18.325 Power State #0: 00:07:18.325 Max Power: 25.00 W 00:07:18.325 Non-Operational State: Operational 00:07:18.325 Entry Latency: 16 microseconds 00:07:18.325 Exit Latency: 4 microseconds 00:07:18.325 Relative Read Throughput: 0 00:07:18.325 Relative Read Latency: 0 00:07:18.325 Relative Write Throughput: 0 00:07:18.325 Relative Write Latency: 0 00:07:18.325 Idle Power: Not Reported 00:07:18.325 Active Power: Not Reported 00:07:18.325 Non-Operational Permissive Mode: Not Supported 00:07:18.325 00:07:18.325 Health Information 00:07:18.325 ================== 00:07:18.325 Critical Warnings: 00:07:18.325 Available Spare Space: OK 00:07:18.325 Temperature: OK 00:07:18.325 Device Reliability: OK 00:07:18.325 Read Only: No 00:07:18.325 Volatile Memory Backup: OK 00:07:18.325 Current Temperature: 323 Kelvin (50 Celsius) 00:07:18.325 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:18.325 Available Spare: 0% 00:07:18.325 Available Spare Threshold: 0% 00:07:18.325 Life Percentage Used: 0% 00:07:18.325 Data Units Read: 2257 00:07:18.325 Data Units Written: 2045 00:07:18.325 Host Read Commands: 124376 00:07:18.325 Host Write Commands: 122645 00:07:18.325 Controller Busy Time: 0 minutes 00:07:18.325 Power Cycles: 0 00:07:18.325 Power On Hours: 0 hours 00:07:18.325 Unsafe Shutdowns: 0 00:07:18.325 Unrecoverable Media Errors: 0 00:07:18.325 Lifetime Error Log Entries: 0 00:07:18.325 Warning Temperature Time: 0 minutes 00:07:18.325 Critical Temperature Time: 0 minutes 00:07:18.325 00:07:18.325 Number of Queues 00:07:18.325 ================ 00:07:18.325 Number of I/O Submission Queues: 64 00:07:18.325 Number of I/O Completion Queues: 64 00:07:18.325 00:07:18.325 ZNS Specific Controller Data 00:07:18.325 ============================ 00:07:18.325 Zone Append Size Limit: 0 00:07:18.325 00:07:18.325 00:07:18.325 Active Namespaces 00:07:18.325 ================= 00:07:18.325 Namespace ID:1 00:07:18.325 Error Recovery Timeout: Unlimited 00:07:18.325 Command Set Identifier: NVM (00h) 00:07:18.325 Deallocate: Supported 00:07:18.325 Deallocated/Unwritten Error: Supported 00:07:18.325 Deallocated Read Value: All 0x00 00:07:18.325 Deallocate in Write Zeroes: Not Supported 00:07:18.325 Deallocated Guard Field: 0xFFFF 00:07:18.325 Flush: Supported 00:07:18.325 Reservation: Not Supported 00:07:18.325 Namespace Sharing Capabilities: Private 00:07:18.325 Size (in LBAs): 1048576 (4GiB) 00:07:18.325 Capacity (in LBAs): 1048576 (4GiB) 00:07:18.325 Utilization (in LBAs): 1048576 (4GiB) 00:07:18.325 Thin Provisioning: Not Supported 00:07:18.325 Per-NS Atomic Units: No 00:07:18.325 Maximum Single Source Range Length: 128 00:07:18.325 Maximum Copy Length: 128 00:07:18.325 Maximum Source Range Count: 128 00:07:18.325 NGUID/EUI64 Never Reused: No 00:07:18.325 Namespace Write Protected: No 00:07:18.325 Number of LBA Formats: 8 00:07:18.325 Current LBA Format: LBA Format #04 00:07:18.325 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:18.325 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:18.325 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:18.325 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:18.325 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:07:18.325 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:18.325 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:18.325 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:18.325 00:07:18.325 NVM Specific Namespace Data 00:07:18.325 =========================== 00:07:18.325 Logical Block Storage Tag Mask: 0 00:07:18.325 Protection Information Capabilities: 00:07:18.325 16b Guard Protection Information Storage Tag Support: No 00:07:18.325 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:18.325 Storage Tag Check Read Support: No 00:07:18.325 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.325 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.325 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.325 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.325 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.325 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.325 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.325 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.325 Namespace ID:2 00:07:18.325 Error Recovery Timeout: Unlimited 00:07:18.325 Command Set Identifier: NVM (00h) 00:07:18.325 Deallocate: Supported 00:07:18.326 Deallocated/Unwritten Error: Supported 00:07:18.326 Deallocated Read Value: All 0x00 00:07:18.326 Deallocate in Write Zeroes: Not Supported 00:07:18.326 Deallocated Guard Field: 0xFFFF 00:07:18.326 Flush: Supported 00:07:18.326 Reservation: Not Supported 00:07:18.326 Namespace Sharing Capabilities: Private 00:07:18.326 Size (in LBAs): 1048576 (4GiB) 00:07:18.326 Capacity (in LBAs): 1048576 (4GiB) 00:07:18.326 Utilization (in LBAs): 1048576 (4GiB) 00:07:18.326 Thin Provisioning: Not Supported 00:07:18.326 Per-NS Atomic Units: No 00:07:18.326 Maximum Single Source Range Length: 128 00:07:18.326 Maximum Copy Length: 128 00:07:18.326 Maximum Source Range Count: 128 00:07:18.326 NGUID/EUI64 Never Reused: No 00:07:18.326 Namespace Write Protected: No 00:07:18.326 Number of LBA Formats: 8 00:07:18.326 Current LBA Format: LBA Format #04 00:07:18.326 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:18.326 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:18.326 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:18.326 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:18.326 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:18.326 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:18.326 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:18.326 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:18.326 00:07:18.326 NVM Specific Namespace Data 00:07:18.326 =========================== 00:07:18.326 Logical Block Storage Tag Mask: 0 00:07:18.326 Protection Information Capabilities: 00:07:18.326 16b Guard Protection Information Storage Tag Support: No 00:07:18.326 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:18.326 Storage Tag Check Read Support: No 00:07:18.326 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.326 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:18.326 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.326 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.326 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.326 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.326 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.326 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.326 Namespace ID:3 00:07:18.326 Error Recovery Timeout: Unlimited 00:07:18.326 Command Set Identifier: NVM (00h) 00:07:18.326 Deallocate: Supported 00:07:18.326 Deallocated/Unwritten Error: Supported 00:07:18.326 Deallocated Read Value: All 0x00 00:07:18.326 Deallocate in Write Zeroes: Not Supported 00:07:18.326 Deallocated Guard Field: 0xFFFF 00:07:18.326 Flush: Supported 00:07:18.326 Reservation: Not Supported 00:07:18.326 Namespace Sharing Capabilities: Private 00:07:18.326 Size (in LBAs): 1048576 (4GiB) 00:07:18.326 Capacity (in LBAs): 1048576 (4GiB) 00:07:18.326 Utilization (in LBAs): 1048576 (4GiB) 00:07:18.326 Thin Provisioning: Not Supported 00:07:18.326 Per-NS Atomic Units: No 00:07:18.326 Maximum Single Source Range Length: 128 00:07:18.326 Maximum Copy Length: 128 00:07:18.326 Maximum Source Range Count: 128 00:07:18.326 NGUID/EUI64 Never Reused: No 00:07:18.326 Namespace Write Protected: No 00:07:18.326 Number of LBA Formats: 8 00:07:18.326 Current LBA Format: LBA Format #04 00:07:18.326 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:18.326 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:18.326 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:18.326 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:18.326 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:18.326 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:18.326 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:18.326 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:18.326 00:07:18.326 NVM Specific Namespace Data 00:07:18.326 =========================== 00:07:18.326 Logical Block Storage Tag Mask: 0 00:07:18.326 Protection Information Capabilities: 00:07:18.326 16b Guard Protection Information Storage Tag Support: No 00:07:18.326 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:18.326 Storage Tag Check Read Support: No 00:07:18.326 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.326 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.326 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.326 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.326 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.326 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.326 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.326 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.326 09:39:05 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:18.326 09:39:05 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:18.588 ===================================================== 00:07:18.588 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:18.588 ===================================================== 00:07:18.588 Controller Capabilities/Features 00:07:18.588 ================================ 00:07:18.588 Vendor ID: 1b36 00:07:18.588 Subsystem Vendor ID: 1af4 00:07:18.588 Serial Number: 12340 00:07:18.588 Model Number: QEMU NVMe Ctrl 00:07:18.588 Firmware Version: 8.0.0 00:07:18.588 Recommended Arb Burst: 6 00:07:18.588 IEEE OUI Identifier: 00 54 52 00:07:18.588 Multi-path I/O 00:07:18.588 May have multiple subsystem ports: No 00:07:18.588 May have multiple controllers: No 00:07:18.588 Associated with SR-IOV VF: No 00:07:18.588 Max Data Transfer Size: 524288 00:07:18.588 Max Number of Namespaces: 256 00:07:18.588 Max Number of I/O Queues: 64 00:07:18.588 NVMe Specification Version (VS): 1.4 00:07:18.588 NVMe Specification Version (Identify): 1.4 00:07:18.588 Maximum Queue Entries: 2048 00:07:18.588 Contiguous Queues Required: Yes 00:07:18.588 Arbitration Mechanisms Supported 00:07:18.588 Weighted Round Robin: Not Supported 00:07:18.588 Vendor Specific: Not Supported 00:07:18.588 Reset Timeout: 7500 ms 00:07:18.588 Doorbell Stride: 4 bytes 00:07:18.588 NVM Subsystem Reset: Not Supported 00:07:18.588 Command Sets Supported 00:07:18.588 NVM Command Set: Supported 00:07:18.588 Boot Partition: Not Supported 00:07:18.588 Memory Page Size Minimum: 4096 bytes 00:07:18.588 Memory Page Size Maximum: 65536 bytes 00:07:18.588 Persistent Memory Region: Not Supported 00:07:18.588 Optional Asynchronous Events Supported 00:07:18.588 Namespace Attribute Notices: Supported 00:07:18.588 Firmware Activation Notices: Not Supported 00:07:18.588 ANA Change Notices: Not Supported 00:07:18.588 PLE Aggregate Log Change Notices: Not Supported 00:07:18.588 LBA Status Info Alert Notices: Not Supported 00:07:18.588 EGE Aggregate Log Change Notices: Not Supported 00:07:18.588 Normal NVM Subsystem Shutdown event: Not Supported 00:07:18.588 Zone Descriptor Change Notices: Not Supported 00:07:18.588 Discovery Log Change Notices: Not Supported 00:07:18.588 Controller Attributes 00:07:18.588 128-bit Host Identifier: Not Supported 00:07:18.588 Non-Operational Permissive Mode: Not Supported 00:07:18.588 NVM Sets: Not Supported 00:07:18.588 Read Recovery Levels: Not Supported 00:07:18.588 Endurance Groups: Not Supported 00:07:18.588 Predictable Latency Mode: Not Supported 00:07:18.588 Traffic Based Keep Alive: Not Supported 00:07:18.588 Namespace Granularity: Not Supported 00:07:18.588 SQ Associations: Not Supported 00:07:18.588 UUID List: Not Supported 00:07:18.588 Multi-Domain Subsystem: Not Supported 00:07:18.588 Fixed Capacity Management: Not Supported 00:07:18.588 Variable Capacity Management: Not Supported 00:07:18.588 Delete Endurance Group: Not Supported 00:07:18.588 Delete NVM Set: Not Supported 00:07:18.588 Extended LBA Formats Supported: Supported 00:07:18.588 Flexible Data Placement Supported: Not Supported 00:07:18.588 00:07:18.588 Controller Memory Buffer Support 00:07:18.588 ================================ 00:07:18.588 Supported: No 00:07:18.588 00:07:18.588 Persistent Memory Region Support 00:07:18.588 ================================ 00:07:18.588 Supported: No 00:07:18.588 00:07:18.588 Admin Command Set Attributes 00:07:18.588 ============================ 00:07:18.588 Security Send/Receive: Not Supported 00:07:18.588
Format NVM: Supported 00:07:18.588 Firmware Activate/Download: Not Supported 00:07:18.588 Namespace Management: Supported 00:07:18.588 Device Self-Test: Not Supported 00:07:18.588 Directives: Supported 00:07:18.588 NVMe-MI: Not Supported 00:07:18.588 Virtualization Management: Not Supported 00:07:18.588 Doorbell Buffer Config: Supported 00:07:18.588 Get LBA Status Capability: Not Supported 00:07:18.588 Command & Feature Lockdown Capability: Not Supported 00:07:18.588 Abort Command Limit: 4 00:07:18.588 Async Event Request Limit: 4 00:07:18.588 Number of Firmware Slots: N/A 00:07:18.588 Firmware Slot 1 Read-Only: N/A 00:07:18.588 Firmware Activation Without Reset: N/A 00:07:18.588 Multiple Update Detection Support: N/A 00:07:18.588 Firmware Update Granularity: No Information Provided 00:07:18.588 Per-Namespace SMART Log: Yes 00:07:18.588 Asymmetric Namespace Access Log Page: Not Supported 00:07:18.588 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:18.588 Command Effects Log Page: Supported 00:07:18.588 Get Log Page Extended Data: Supported 00:07:18.588 Telemetry Log Pages: Not Supported 00:07:18.588 Persistent Event Log Pages: Not Supported 00:07:18.588 Supported Log Pages Log Page: May Support 00:07:18.588 Commands Supported & Effects Log Page: Not Supported 00:07:18.588 Feature Identifiers & Effects Log Page: May Support 00:07:18.588 NVMe-MI Commands & Effects Log Page: May Support 00:07:18.588 Data Area 4 for Telemetry Log: Not Supported 00:07:18.588 Error Log Page Entries Supported: 1 00:07:18.588 Keep Alive: Not Supported 00:07:18.588 00:07:18.588 NVM Command Set Attributes 00:07:18.588 ========================== 00:07:18.588 Submission Queue Entry Size 00:07:18.588 Max: 64 00:07:18.588 Min: 64 00:07:18.588 Completion Queue Entry Size 00:07:18.588 Max: 16 00:07:18.588 Min: 16 00:07:18.588 Number of Namespaces: 256 00:07:18.588 Compare Command: Supported 00:07:18.588 Write Uncorrectable Command: Not Supported 00:07:18.588 Dataset Management Command: Supported 00:07:18.588 Write Zeroes Command: Supported 00:07:18.588 Set Features Save Field: Supported 00:07:18.588 Reservations: Not Supported 00:07:18.588 Timestamp: Supported 00:07:18.588 Copy: Supported 00:07:18.588 Volatile Write Cache: Present 00:07:18.588 Atomic Write Unit (Normal): 1 00:07:18.588 Atomic Write Unit (PFail): 1 00:07:18.588 Atomic Compare & Write Unit: 1 00:07:18.588 Fused Compare & Write: Not Supported 00:07:18.588 Scatter-Gather List 00:07:18.588 SGL Command Set: Supported 00:07:18.588 SGL Keyed: Not Supported 00:07:18.588 SGL Bit Bucket Descriptor: Not Supported 00:07:18.588 SGL Metadata Pointer: Not Supported 00:07:18.588 Oversized SGL: Not Supported 00:07:18.588 SGL Metadata Address: Not Supported 00:07:18.588 SGL Offset: Not Supported 00:07:18.588 Transport SGL Data Block: Not Supported 00:07:18.588 Replay Protected Memory Block: Not Supported 00:07:18.588 00:07:18.588 Firmware Slot Information 00:07:18.588 ========================= 00:07:18.588 Active slot: 1 00:07:18.588 Slot 1 Firmware Revision: 1.0 00:07:18.588 00:07:18.588 00:07:18.588 Commands Supported and Effects 00:07:18.588 ============================== 00:07:18.588 Admin Commands 00:07:18.588 -------------- 00:07:18.588 Delete I/O Submission Queue (00h): Supported 00:07:18.588 Create I/O Submission Queue (01h): Supported 00:07:18.588 Get Log Page (02h): Supported 00:07:18.588 Delete I/O Completion Queue (04h): Supported 00:07:18.588 Create I/O Completion Queue (05h): Supported 00:07:18.588 Identify (06h): Supported 00:07:18.588 Abort (08h): Supported
00:07:18.588 Set Features (09h): Supported 00:07:18.588 Get Features (0Ah): Supported 00:07:18.588 Asynchronous Event Request (0Ch): Supported 00:07:18.588 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:18.588 Directive Send (19h): Supported 00:07:18.588 Directive Receive (1Ah): Supported 00:07:18.588 Virtualization Management (1Ch): Supported 00:07:18.588 Doorbell Buffer Config (7Ch): Supported 00:07:18.588 Format NVM (80h): Supported LBA-Change 00:07:18.588 I/O Commands 00:07:18.588 ------------ 00:07:18.588 Flush (00h): Supported LBA-Change 00:07:18.588 Write (01h): Supported LBA-Change 00:07:18.588 Read (02h): Supported 00:07:18.588 Compare (05h): Supported 00:07:18.588 Write Zeroes (08h): Supported LBA-Change 00:07:18.588 Dataset Management (09h): Supported LBA-Change 00:07:18.588 Unknown (0Ch): Supported 00:07:18.588 Unknown (12h): Supported 00:07:18.588 Copy (19h): Supported LBA-Change 00:07:18.588 Unknown (1Dh): Supported LBA-Change 00:07:18.588 00:07:18.588 Error Log 00:07:18.588 ========= 00:07:18.588 00:07:18.588 Arbitration 00:07:18.588 =========== 00:07:18.588 Arbitration Burst: no limit 00:07:18.588 00:07:18.588 Power Management 00:07:18.588 ================ 00:07:18.588 Number of Power States: 1 00:07:18.588 Current Power State: Power State #0 00:07:18.588 Power State #0: 00:07:18.588 Max Power: 25.00 W 00:07:18.588 Non-Operational State: Operational 00:07:18.588 Entry Latency: 16 microseconds 00:07:18.588 Exit Latency: 4 microseconds 00:07:18.589 Relative Read Throughput: 0 00:07:18.589 Relative Read Latency: 0 00:07:18.589 Relative Write Throughput: 0 00:07:18.589 Relative Write Latency: 0 00:07:18.589 Idle Power: Not Reported 00:07:18.589 Active Power: Not Reported 00:07:18.589 Non-Operational Permissive Mode: Not Supported 00:07:18.589 00:07:18.589 Health Information 00:07:18.589 ================== 00:07:18.589 Critical Warnings: 00:07:18.589 Available Spare Space: OK 00:07:18.589 Temperature: OK 00:07:18.589 Device Reliability: OK 00:07:18.589 Read Only: No 00:07:18.589 Volatile Memory Backup: OK 00:07:18.589 Current Temperature: 323 Kelvin (50 Celsius) 00:07:18.589 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:18.589 Available Spare: 0% 00:07:18.589 Available Spare Threshold: 0% 00:07:18.589 Life Percentage Used: 0% 00:07:18.589 Data Units Read: 717 00:07:18.589 Data Units Written: 645 00:07:18.589 Host Read Commands: 40783 00:07:18.589 Host Write Commands: 40569 00:07:18.589 Controller Busy Time: 0 minutes 00:07:18.589 Power Cycles: 0 00:07:18.589 Power On Hours: 0 hours 00:07:18.589 Unsafe Shutdowns: 0 00:07:18.589 Unrecoverable Media Errors: 0 00:07:18.589 Lifetime Error Log Entries: 0 00:07:18.589 Warning Temperature Time: 0 minutes 00:07:18.589 Critical Temperature Time: 0 minutes 00:07:18.589 00:07:18.589 Number of Queues 00:07:18.589 ================ 00:07:18.589 Number of I/O Submission Queues: 64 00:07:18.589 Number of I/O Completion Queues: 64 00:07:18.589 00:07:18.589 ZNS Specific Controller Data 00:07:18.589 ============================ 00:07:18.589 Zone Append Size Limit: 0 00:07:18.589 00:07:18.589 00:07:18.589 Active Namespaces 00:07:18.589 ================= 00:07:18.589 Namespace ID:1 00:07:18.589 Error Recovery Timeout: Unlimited 00:07:18.589 Command Set Identifier: NVM (00h) 00:07:18.589 Deallocate: Supported 00:07:18.589 Deallocated/Unwritten Error: Supported 00:07:18.589 Deallocated Read Value: All 0x00 00:07:18.589 Deallocate in Write Zeroes: Not Supported 00:07:18.589 Deallocated Guard Field: 0xFFFF 00:07:18.589 Flush: 
Supported 00:07:18.589 Reservation: Not Supported 00:07:18.589 Metadata Transferred as: Separate Metadata Buffer 00:07:18.589 Namespace Sharing Capabilities: Private 00:07:18.589 Size (in LBAs): 1548666 (5GiB) 00:07:18.589 Capacity (in LBAs): 1548666 (5GiB) 00:07:18.589 Utilization (in LBAs): 1548666 (5GiB) 00:07:18.589 Thin Provisioning: Not Supported 00:07:18.589 Per-NS Atomic Units: No 00:07:18.589 Maximum Single Source Range Length: 128 00:07:18.589 Maximum Copy Length: 128 00:07:18.589 Maximum Source Range Count: 128 00:07:18.589 NGUID/EUI64 Never Reused: No 00:07:18.589 Namespace Write Protected: No 00:07:18.589 Number of LBA Formats: 8 00:07:18.589 Current LBA Format: LBA Format #07 00:07:18.589 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:18.589 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:18.589 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:18.589 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:18.589 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:18.589 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:18.589 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:18.589 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:18.589 00:07:18.589 NVM Specific Namespace Data 00:07:18.589 =========================== 00:07:18.589 Logical Block Storage Tag Mask: 0 00:07:18.589 Protection Information Capabilities: 00:07:18.589 16b Guard Protection Information Storage Tag Support: No 00:07:18.589 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:18.589 Storage Tag Check Read Support: No 00:07:18.589 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.589 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.589 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.589 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.589 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.589 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.589 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.589 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.589 09:39:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:18.589 09:39:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:18.851 ===================================================== 00:07:18.851 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:18.851 ===================================================== 00:07:18.851 Controller Capabilities/Features 00:07:18.851 ================================ 00:07:18.851 Vendor ID: 1b36 00:07:18.851 Subsystem Vendor ID: 1af4 00:07:18.851 Serial Number: 12341 00:07:18.851 Model Number: QEMU NVMe Ctrl 00:07:18.851 Firmware Version: 8.0.0 00:07:18.851 Recommended Arb Burst: 6 00:07:18.851 IEEE OUI Identifier: 00 54 52 00:07:18.851 Multi-path I/O 00:07:18.851 May have multiple subsystem ports: No 00:07:18.851 May have multiple controllers: No 00:07:18.851 Associated with SR-IOV VF: No 00:07:18.851 Max Data Transfer Size: 524288 00:07:18.851 Max Number of Namespaces: 256 00:07:18.851 Max Number of I/O Queues: 64 00:07:18.851 NVMe 
Specification Version (VS): 1.4 00:07:18.851 NVMe Specification Version (Identify): 1.4 00:07:18.851 Maximum Queue Entries: 2048 00:07:18.851 Contiguous Queues Required: Yes 00:07:18.851 Arbitration Mechanisms Supported 00:07:18.851 Weighted Round Robin: Not Supported 00:07:18.851 Vendor Specific: Not Supported 00:07:18.851 Reset Timeout: 7500 ms 00:07:18.851 Doorbell Stride: 4 bytes 00:07:18.851 NVM Subsystem Reset: Not Supported 00:07:18.851 Command Sets Supported 00:07:18.851 NVM Command Set: Supported 00:07:18.851 Boot Partition: Not Supported 00:07:18.851 Memory Page Size Minimum: 4096 bytes 00:07:18.851 Memory Page Size Maximum: 65536 bytes 00:07:18.851 Persistent Memory Region: Not Supported 00:07:18.851 Optional Asynchronous Events Supported 00:07:18.851 Namespace Attribute Notices: Supported 00:07:18.851 Firmware Activation Notices: Not Supported 00:07:18.851 ANA Change Notices: Not Supported 00:07:18.851 PLE Aggregate Log Change Notices: Not Supported 00:07:18.851 LBA Status Info Alert Notices: Not Supported 00:07:18.851 EGE Aggregate Log Change Notices: Not Supported 00:07:18.851 Normal NVM Subsystem Shutdown event: Not Supported 00:07:18.851 Zone Descriptor Change Notices: Not Supported 00:07:18.851 Discovery Log Change Notices: Not Supported 00:07:18.851 Controller Attributes 00:07:18.851 128-bit Host Identifier: Not Supported 00:07:18.851 Non-Operational Permissive Mode: Not Supported 00:07:18.851 NVM Sets: Not Supported 00:07:18.851 Read Recovery Levels: Not Supported 00:07:18.851 Endurance Groups: Not Supported 00:07:18.851 Predictable Latency Mode: Not Supported 00:07:18.851 Traffic Based Keep Alive: Not Supported 00:07:18.851 Namespace Granularity: Not Supported 00:07:18.851 SQ Associations: Not Supported 00:07:18.851 UUID List: Not Supported 00:07:18.851 Multi-Domain Subsystem: Not Supported 00:07:18.851 Fixed Capacity Management: Not Supported 00:07:18.851 Variable Capacity Management: Not Supported 00:07:18.851 Delete Endurance Group: Not Supported 00:07:18.851 Delete NVM Set: Not Supported 00:07:18.851 Extended LBA Formats Supported: Supported 00:07:18.851 Flexible Data Placement Supported: Not Supported 00:07:18.851 00:07:18.851 Controller Memory Buffer Support 00:07:18.851 ================================ 00:07:18.851 Supported: No 00:07:18.851 00:07:18.851 Persistent Memory Region Support 00:07:18.851 ================================ 00:07:18.851 Supported: No 00:07:18.851 00:07:18.851 Admin Command Set Attributes 00:07:18.851 ============================ 00:07:18.851 Security Send/Receive: Not Supported 00:07:18.851 Format NVM: Supported 00:07:18.851 Firmware Activate/Download: Not Supported 00:07:18.851 Namespace Management: Supported 00:07:18.851 Device Self-Test: Not Supported 00:07:18.851 Directives: Supported 00:07:18.851 NVMe-MI: Not Supported 00:07:18.851 Virtualization Management: Not Supported 00:07:18.851 Doorbell Buffer Config: Supported 00:07:18.851 Get LBA Status Capability: Not Supported 00:07:18.851 Command & Feature Lockdown Capability: Not Supported 00:07:18.851 Abort Command Limit: 4 00:07:18.851 Async Event Request Limit: 4 00:07:18.851 Number of Firmware Slots: N/A 00:07:18.851 Firmware Slot 1 Read-Only: N/A 00:07:18.851 Firmware Activation Without Reset: N/A 00:07:18.851 Multiple Update Detection Support: N/A 00:07:18.851 Firmware Update Granularity: No Information Provided 00:07:18.851 Per-Namespace SMART Log: Yes 00:07:18.851 Asymmetric Namespace Access Log Page: Not Supported 00:07:18.851 Subsystem NQN: nqn.2019-08.org.qemu:12341
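Each of these dumps comes from the loop traced in nvme/nvme.sh (the "for bdf in "${bdfs[@]}"" lines interleaved with the output): one spdk_nvme_identify invocation per PCIe controller. A minimal sketch of that pattern follows; the command line is taken verbatim from the trace, while the way the bdfs array is populated is an assumption, since the log does not show it:
#!/usr/bin/env bash
# One identify dump per controller under test. The BDF list below simply
# hard-codes the four controllers seen in this run; the real script derives it.
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
for bdf in "${bdfs[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:${bdf}" -i 0
done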
00:07:18.851 Command Effects Log Page: Supported 00:07:18.851 Get Log Page Extended Data: Supported 00:07:18.851 Telemetry Log Pages: Not Supported 00:07:18.851 Persistent Event Log Pages: Not Supported 00:07:18.851 Supported Log Pages Log Page: May Support 00:07:18.851 Commands Supported & Effects Log Page: Not Supported 00:07:18.851 Feature Identifiers & Effects Log Page: May Support 00:07:18.851 NVMe-MI Commands & Effects Log Page: May Support 00:07:18.851 Data Area 4 for Telemetry Log: Not Supported 00:07:18.851 Error Log Page Entries Supported: 1 00:07:18.851 Keep Alive: Not Supported 00:07:18.851 00:07:18.851 NVM Command Set Attributes 00:07:18.851 ========================== 00:07:18.851 Submission Queue Entry Size 00:07:18.851 Max: 64 00:07:18.851 Min: 64 00:07:18.851 Completion Queue Entry Size 00:07:18.852 Max: 16 00:07:18.852 Min: 16 00:07:18.852 Number of Namespaces: 256 00:07:18.852 Compare Command: Supported 00:07:18.852 Write Uncorrectable Command: Not Supported 00:07:18.852 Dataset Management Command: Supported 00:07:18.852 Write Zeroes Command: Supported 00:07:18.852 Set Features Save Field: Supported 00:07:18.852 Reservations: Not Supported 00:07:18.852 Timestamp: Supported 00:07:18.852 Copy: Supported 00:07:18.852 Volatile Write Cache: Present 00:07:18.852 Atomic Write Unit (Normal): 1 00:07:18.852 Atomic Write Unit (PFail): 1 00:07:18.852 Atomic Compare & Write Unit: 1 00:07:18.852 Fused Compare & Write: Not Supported 00:07:18.852 Scatter-Gather List 00:07:18.852 SGL Command Set: Supported 00:07:18.852 SGL Keyed: Not Supported 00:07:18.852 SGL Bit Bucket Descriptor: Not Supported 00:07:18.852 SGL Metadata Pointer: Not Supported 00:07:18.852 Oversized SGL: Not Supported 00:07:18.852 SGL Metadata Address: Not Supported 00:07:18.852 SGL Offset: Not Supported 00:07:18.852 Transport SGL Data Block: Not Supported 00:07:18.852 Replay Protected Memory Block: Not Supported 00:07:18.852 00:07:18.852 Firmware Slot Information 00:07:18.852 ========================= 00:07:18.852 Active slot: 1 00:07:18.852 Slot 1 Firmware Revision: 1.0 00:07:18.852 00:07:18.852 00:07:18.852 Commands Supported and Effects 00:07:18.852 ============================== 00:07:18.852 Admin Commands 00:07:18.852 -------------- 00:07:18.852 Delete I/O Submission Queue (00h): Supported 00:07:18.852 Create I/O Submission Queue (01h): Supported 00:07:18.852 Get Log Page (02h): Supported 00:07:18.852 Delete I/O Completion Queue (04h): Supported 00:07:18.852 Create I/O Completion Queue (05h): Supported 00:07:18.852 Identify (06h): Supported 00:07:18.852 Abort (08h): Supported 00:07:18.852 Set Features (09h): Supported 00:07:18.852 Get Features (0Ah): Supported 00:07:18.852 Asynchronous Event Request (0Ch): Supported 00:07:18.852 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:18.852 Directive Send (19h): Supported 00:07:18.852 Directive Receive (1Ah): Supported 00:07:18.852 Virtualization Management (1Ch): Supported 00:07:18.852 Doorbell Buffer Config (7Ch): Supported 00:07:18.852 Format NVM (80h): Supported LBA-Change 00:07:18.852 I/O Commands 00:07:18.852 ------------ 00:07:18.852 Flush (00h): Supported LBA-Change 00:07:18.852 Write (01h): Supported LBA-Change 00:07:18.852 Read (02h): Supported 00:07:18.852 Compare (05h): Supported 00:07:18.852 Write Zeroes (08h): Supported LBA-Change 00:07:18.852 Dataset Management (09h): Supported LBA-Change 00:07:18.852 Unknown (0Ch): Supported 00:07:18.852 Unknown (12h): Supported 00:07:18.852 Copy (19h): Supported LBA-Change 00:07:18.852 Unknown (1Dh):
Supported LBA-Change 00:07:18.852 00:07:18.852 Error Log 00:07:18.852 ========= 00:07:18.852 00:07:18.852 Arbitration 00:07:18.852 =========== 00:07:18.852 Arbitration Burst: no limit 00:07:18.852 00:07:18.852 Power Management 00:07:18.852 ================ 00:07:18.852 Number of Power States: 1 00:07:18.852 Current Power State: Power State #0 00:07:18.852 Power State #0: 00:07:18.852 Max Power: 25.00 W 00:07:18.852 Non-Operational State: Operational 00:07:18.852 Entry Latency: 16 microseconds 00:07:18.852 Exit Latency: 4 microseconds 00:07:18.852 Relative Read Throughput: 0 00:07:18.852 Relative Read Latency: 0 00:07:18.852 Relative Write Throughput: 0 00:07:18.852 Relative Write Latency: 0 00:07:18.852 Idle Power: Not Reported 00:07:18.852 Active Power: Not Reported 00:07:18.852 Non-Operational Permissive Mode: Not Supported 00:07:18.852 00:07:18.852 Health Information 00:07:18.852 ================== 00:07:18.852 Critical Warnings: 00:07:18.852 Available Spare Space: OK 00:07:18.852 Temperature: OK 00:07:18.852 Device Reliability: OK 00:07:18.852 Read Only: No 00:07:18.852 Volatile Memory Backup: OK 00:07:18.852 Current Temperature: 323 Kelvin (50 Celsius) 00:07:18.852 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:18.852 Available Spare: 0% 00:07:18.852 Available Spare Threshold: 0% 00:07:18.852 Life Percentage Used: 0% 00:07:18.852 Data Units Read: 1099 00:07:18.852 Data Units Written: 960 00:07:18.852 Host Read Commands: 59625 00:07:18.852 Host Write Commands: 58329 00:07:18.852 Controller Busy Time: 0 minutes 00:07:18.852 Power Cycles: 0 00:07:18.852 Power On Hours: 0 hours 00:07:18.852 Unsafe Shutdowns: 0 00:07:18.852 Unrecoverable Media Errors: 0 00:07:18.852 Lifetime Error Log Entries: 0 00:07:18.852 Warning Temperature Time: 0 minutes 00:07:18.852 Critical Temperature Time: 0 minutes 00:07:18.852 00:07:18.852 Number of Queues 00:07:18.852 ================ 00:07:18.852 Number of I/O Submission Queues: 64 00:07:18.852 Number of I/O Completion Queues: 64 00:07:18.852 00:07:18.852 ZNS Specific Controller Data 00:07:18.852 ============================ 00:07:18.852 Zone Append Size Limit: 0 00:07:18.852 00:07:18.852 00:07:18.852 Active Namespaces 00:07:18.852 ================= 00:07:18.852 Namespace ID:1 00:07:18.852 Error Recovery Timeout: Unlimited 00:07:18.852 Command Set Identifier: NVM (00h) 00:07:18.852 Deallocate: Supported 00:07:18.852 Deallocated/Unwritten Error: Supported 00:07:18.852 Deallocated Read Value: All 0x00 00:07:18.852 Deallocate in Write Zeroes: Not Supported 00:07:18.852 Deallocated Guard Field: 0xFFFF 00:07:18.852 Flush: Supported 00:07:18.852 Reservation: Not Supported 00:07:18.852 Namespace Sharing Capabilities: Private 00:07:18.852 Size (in LBAs): 1310720 (5GiB) 00:07:18.852 Capacity (in LBAs): 1310720 (5GiB) 00:07:18.852 Utilization (in LBAs): 1310720 (5GiB) 00:07:18.852 Thin Provisioning: Not Supported 00:07:18.852 Per-NS Atomic Units: No 00:07:18.852 Maximum Single Source Range Length: 128 00:07:18.852 Maximum Copy Length: 128 00:07:18.852 Maximum Source Range Count: 128 00:07:18.852 NGUID/EUI64 Never Reused: No 00:07:18.852 Namespace Write Protected: No 00:07:18.852 Number of LBA Formats: 8 00:07:18.852 Current LBA Format: LBA Format #04 00:07:18.852 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:18.852 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:18.852 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:18.852 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:18.852 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:18.852 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:18.852 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:18.852 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:18.852 00:07:18.852 NVM Specific Namespace Data 00:07:18.852 =========================== 00:07:18.852 Logical Block Storage Tag Mask: 0 00:07:18.852 Protection Information Capabilities: 00:07:18.852 16b Guard Protection Information Storage Tag Support: No 00:07:18.852 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:18.852 Storage Tag Check Read Support: No 00:07:18.852 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.852 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.852 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.852 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.852 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.852 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.852 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.852 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:18.852 09:39:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:18.852 09:39:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:19.114 ===================================================== 00:07:19.114 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:19.114 ===================================================== 00:07:19.114 Controller Capabilities/Features 00:07:19.114 ================================ 00:07:19.114 Vendor ID: 1b36 00:07:19.114 Subsystem Vendor ID: 1af4 00:07:19.114 Serial Number: 12342 00:07:19.114 Model Number: QEMU NVMe Ctrl 00:07:19.114 Firmware Version: 8.0.0 00:07:19.114 Recommended Arb Burst: 6 00:07:19.114 IEEE OUI Identifier: 00 54 52 00:07:19.114 Multi-path I/O 00:07:19.114 May have multiple subsystem ports: No 00:07:19.114 May have multiple controllers: No 00:07:19.114 Associated with SR-IOV VF: No 00:07:19.114 Max Data Transfer Size: 524288 00:07:19.114 Max Number of Namespaces: 256 00:07:19.114 Max Number of I/O Queues: 64 00:07:19.114 NVMe Specification Version (VS): 1.4 00:07:19.114 NVMe Specification Version (Identify): 1.4 00:07:19.114 Maximum Queue Entries: 2048 00:07:19.114 Contiguous Queues Required: Yes 00:07:19.114 Arbitration Mechanisms Supported 00:07:19.114 Weighted Round Robin: Not Supported 00:07:19.114 Vendor Specific: Not Supported 00:07:19.114 Reset Timeout: 7500 ms 00:07:19.114 Doorbell Stride: 4 bytes 00:07:19.114 NVM Subsystem Reset: Not Supported 00:07:19.114 Command Sets Supported 00:07:19.114 NVM Command Set: Supported 00:07:19.114 Boot Partition: Not Supported 00:07:19.114 Memory Page Size Minimum: 4096 bytes 00:07:19.114 Memory Page Size Maximum: 65536 bytes 00:07:19.114 Persistent Memory Region: Not Supported 00:07:19.114 Optional Asynchronous Events Supported 00:07:19.114 Namespace Attribute Notices: Supported 00:07:19.114 Firmware Activation Notices: Not Supported 00:07:19.114 ANA Change Notices: Not Supported 00:07:19.114 PLE Aggregate Log Change Notices: Not Supported 00:07:19.114 LBA Status Info Alert Notices: 
Not Supported 00:07:19.114 EGE Aggregate Log Change Notices: Not Supported 00:07:19.114 Normal NVM Subsystem Shutdown event: Not Supported 00:07:19.114 Zone Descriptor Change Notices: Not Supported 00:07:19.114 Discovery Log Change Notices: Not Supported 00:07:19.114 Controller Attributes 00:07:19.114 128-bit Host Identifier: Not Supported 00:07:19.114 Non-Operational Permissive Mode: Not Supported 00:07:19.114 NVM Sets: Not Supported 00:07:19.114 Read Recovery Levels: Not Supported 00:07:19.114 Endurance Groups: Not Supported 00:07:19.114 Predictable Latency Mode: Not Supported 00:07:19.114 Traffic Based Keep Alive: Not Supported 00:07:19.114 Namespace Granularity: Not Supported 00:07:19.114 SQ Associations: Not Supported 00:07:19.114 UUID List: Not Supported 00:07:19.114 Multi-Domain Subsystem: Not Supported 00:07:19.114 Fixed Capacity Management: Not Supported 00:07:19.114 Variable Capacity Management: Not Supported 00:07:19.114 Delete Endurance Group: Not Supported 00:07:19.114 Delete NVM Set: Not Supported 00:07:19.114 Extended LBA Formats Supported: Supported 00:07:19.114 Flexible Data Placement Supported: Not Supported 00:07:19.114 00:07:19.114 Controller Memory Buffer Support 00:07:19.114 ================================ 00:07:19.114 Supported: No 00:07:19.114 00:07:19.114 Persistent Memory Region Support 00:07:19.114 ================================ 00:07:19.114 Supported: No 00:07:19.114 00:07:19.114 Admin Command Set Attributes 00:07:19.114 ============================ 00:07:19.114 Security Send/Receive: Not Supported 00:07:19.114 Format NVM: Supported 00:07:19.114 Firmware Activate/Download: Not Supported 00:07:19.114 Namespace Management: Supported 00:07:19.114 Device Self-Test: Not Supported 00:07:19.114 Directives: Supported 00:07:19.114 NVMe-MI: Not Supported 00:07:19.114 Virtualization Management: Not Supported 00:07:19.114 Doorbell Buffer Config: Supported 00:07:19.114 Get LBA Status Capability: Not Supported 00:07:19.114 Command & Feature Lockdown Capability: Not Supported 00:07:19.114 Abort Command Limit: 4 00:07:19.114 Async Event Request Limit: 4 00:07:19.114 Number of Firmware Slots: N/A 00:07:19.114 Firmware Slot 1 Read-Only: N/A 00:07:19.114 Firmware Activation Without Reset: N/A 00:07:19.114 Multiple Update Detection Support: N/A 00:07:19.114 Firmware Update Granularity: No Information Provided 00:07:19.114 Per-Namespace SMART Log: Yes 00:07:19.114 Asymmetric Namespace Access Log Page: Not Supported 00:07:19.114 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:19.114 Command Effects Log Page: Supported 00:07:19.114 Get Log Page Extended Data: Supported 00:07:19.114 Telemetry Log Pages: Not Supported 00:07:19.114 Persistent Event Log Pages: Not Supported 00:07:19.114 Supported Log Pages Log Page: May Support 00:07:19.114 Commands Supported & Effects Log Page: Not Supported 00:07:19.114 Feature Identifiers & Effects Log Page: May Support 00:07:19.114 NVMe-MI Commands & Effects Log Page: May Support 00:07:19.114 Data Area 4 for Telemetry Log: Not Supported 00:07:19.114 Error Log Page Entries Supported: 1 00:07:19.114 Keep Alive: Not Supported 00:07:19.114 00:07:19.114 NVM Command Set Attributes 00:07:19.114 ========================== 00:07:19.114 Submission Queue Entry Size 00:07:19.114 Max: 64 00:07:19.114 Min: 64 00:07:19.114 Completion Queue Entry Size 00:07:19.114 Max: 16 00:07:19.114 Min: 16 00:07:19.114 Number of Namespaces: 256 00:07:19.114 Compare Command: Supported 00:07:19.114 Write Uncorrectable Command: Not Supported 00:07:19.114 Dataset Management Command:
Supported 00:07:19.114 Write Zeroes Command: Supported 00:07:19.114 Set Features Save Field: Supported 00:07:19.114 Reservations: Not Supported 00:07:19.114 Timestamp: Supported 00:07:19.114 Copy: Supported 00:07:19.114 Volatile Write Cache: Present 00:07:19.114 Atomic Write Unit (Normal): 1 00:07:19.114 Atomic Write Unit (PFail): 1 00:07:19.114 Atomic Compare & Write Unit: 1 00:07:19.114 Fused Compare & Write: Not Supported 00:07:19.114 Scatter-Gather List 00:07:19.114 SGL Command Set: Supported 00:07:19.114 SGL Keyed: Not Supported 00:07:19.114 SGL Bit Bucket Descriptor: Not Supported 00:07:19.114 SGL Metadata Pointer: Not Supported 00:07:19.114 Oversized SGL: Not Supported 00:07:19.114 SGL Metadata Address: Not Supported 00:07:19.114 SGL Offset: Not Supported 00:07:19.114 Transport SGL Data Block: Not Supported 00:07:19.114 Replay Protected Memory Block: Not Supported 00:07:19.114 00:07:19.114 Firmware Slot Information 00:07:19.114 ========================= 00:07:19.114 Active slot: 1 00:07:19.114 Slot 1 Firmware Revision: 1.0 00:07:19.114 00:07:19.114 00:07:19.114 Commands Supported and Effects 00:07:19.114 ============================== 00:07:19.114 Admin Commands 00:07:19.115 -------------- 00:07:19.115 Delete I/O Submission Queue (00h): Supported 00:07:19.115 Create I/O Submission Queue (01h): Supported 00:07:19.115 Get Log Page (02h): Supported 00:07:19.115 Delete I/O Completion Queue (04h): Supported 00:07:19.115 Create I/O Completion Queue (05h): Supported 00:07:19.115 Identify (06h): Supported 00:07:19.115 Abort (08h): Supported 00:07:19.115 Set Features (09h): Supported 00:07:19.115 Get Features (0Ah): Supported 00:07:19.115 Asynchronous Event Request (0Ch): Supported 00:07:19.115 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:19.115 Directive Send (19h): Supported 00:07:19.115 Directive Receive (1Ah): Supported 00:07:19.115 Virtualization Management (1Ch): Supported 00:07:19.115 Doorbell Buffer Config (7Ch): Supported 00:07:19.115 Format NVM (80h): Supported LBA-Change 00:07:19.115 I/O Commands 00:07:19.115 ------------ 00:07:19.115 Flush (00h): Supported LBA-Change 00:07:19.115 Write (01h): Supported LBA-Change 00:07:19.115 Read (02h): Supported 00:07:19.115 Compare (05h): Supported 00:07:19.115 Write Zeroes (08h): Supported LBA-Change 00:07:19.115 Dataset Management (09h): Supported LBA-Change 00:07:19.115 Unknown (0Ch): Supported 00:07:19.115 Unknown (12h): Supported 00:07:19.115 Copy (19h): Supported LBA-Change 00:07:19.115 Unknown (1Dh): Supported LBA-Change 00:07:19.115 00:07:19.115 Error Log 00:07:19.115 ========= 00:07:19.115 00:07:19.115 Arbitration 00:07:19.115 =========== 00:07:19.115 Arbitration Burst: no limit 00:07:19.115 00:07:19.115 Power Management 00:07:19.115 ================ 00:07:19.115 Number of Power States: 1 00:07:19.115 Current Power State: Power State #0 00:07:19.115 Power State #0: 00:07:19.115 Max Power: 25.00 W 00:07:19.115 Non-Operational State: Operational 00:07:19.115 Entry Latency: 16 microseconds 00:07:19.115 Exit Latency: 4 microseconds 00:07:19.115 Relative Read Throughput: 0 00:07:19.115 Relative Read Latency: 0 00:07:19.115 Relative Write Throughput: 0 00:07:19.115 Relative Write Latency: 0 00:07:19.115 Idle Power: Not Reported 00:07:19.115 Active Power: Not Reported 00:07:19.115 Non-Operational Permissive Mode: Not Supported 00:07:19.115 00:07:19.115 Health Information 00:07:19.115 ================== 00:07:19.115 Critical Warnings: 00:07:19.115 Available Spare Space: OK 00:07:19.115 Temperature: OK 00:07:19.115 Device 
Reliability: OK 00:07:19.115 Read Only: No 00:07:19.115 Volatile Memory Backup: OK 00:07:19.115 Current Temperature: 323 Kelvin (50 Celsius) 00:07:19.115 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:19.115 Available Spare: 0% 00:07:19.115 Available Spare Threshold: 0% 00:07:19.115 Life Percentage Used: 0% 00:07:19.115 Data Units Read: 2257 00:07:19.115 Data Units Written: 2045 00:07:19.115 Host Read Commands: 124376 00:07:19.115 Host Write Commands: 122645 00:07:19.115 Controller Busy Time: 0 minutes 00:07:19.115 Power Cycles: 0 00:07:19.115 Power On Hours: 0 hours 00:07:19.115 Unsafe Shutdowns: 0 00:07:19.115 Unrecoverable Media Errors: 0 00:07:19.115 Lifetime Error Log Entries: 0 00:07:19.115 Warning Temperature Time: 0 minutes 00:07:19.115 Critical Temperature Time: 0 minutes 00:07:19.115 00:07:19.115 Number of Queues 00:07:19.115 ================ 00:07:19.115 Number of I/O Submission Queues: 64 00:07:19.115 Number of I/O Completion Queues: 64 00:07:19.115 00:07:19.115 ZNS Specific Controller Data 00:07:19.115 ============================ 00:07:19.115 Zone Append Size Limit: 0 00:07:19.115 00:07:19.115 00:07:19.115 Active Namespaces 00:07:19.115 ================= 00:07:19.115 Namespace ID:1 00:07:19.115 Error Recovery Timeout: Unlimited 00:07:19.115 Command Set Identifier: NVM (00h) 00:07:19.115 Deallocate: Supported 00:07:19.115 Deallocated/Unwritten Error: Supported 00:07:19.115 Deallocated Read Value: All 0x00 00:07:19.115 Deallocate in Write Zeroes: Not Supported 00:07:19.115 Deallocated Guard Field: 0xFFFF 00:07:19.115 Flush: Supported 00:07:19.115 Reservation: Not Supported 00:07:19.115 Namespace Sharing Capabilities: Private 00:07:19.115 Size (in LBAs): 1048576 (4GiB) 00:07:19.115 Capacity (in LBAs): 1048576 (4GiB) 00:07:19.115 Utilization (in LBAs): 1048576 (4GiB) 00:07:19.115 Thin Provisioning: Not Supported 00:07:19.115 Per-NS Atomic Units: No 00:07:19.115 Maximum Single Source Range Length: 128 00:07:19.115 Maximum Copy Length: 128 00:07:19.115 Maximum Source Range Count: 128 00:07:19.115 NGUID/EUI64 Never Reused: No 00:07:19.115 Namespace Write Protected: No 00:07:19.115 Number of LBA Formats: 8 00:07:19.115 Current LBA Format: LBA Format #04 00:07:19.115 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:19.115 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:19.115 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:19.115 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:19.115 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:19.115 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:19.115 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:19.115 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:19.115 00:07:19.115 NVM Specific Namespace Data 00:07:19.115 =========================== 00:07:19.115 Logical Block Storage Tag Mask: 0 00:07:19.115 Protection Information Capabilities: 00:07:19.115 16b Guard Protection Information Storage Tag Support: No 00:07:19.115 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:19.115 Storage Tag Check Read Support: No 00:07:19.115 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.115 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.115 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.115 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:19.115 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.115 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.115 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.115 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.115 Namespace ID:2
00:07:19.115 Error Recovery Timeout: Unlimited
00:07:19.115 Command Set Identifier: NVM (00h)
00:07:19.115 Deallocate: Supported
00:07:19.115 Deallocated/Unwritten Error: Supported
00:07:19.115 Deallocated Read Value: All 0x00
00:07:19.115 Deallocate in Write Zeroes: Not Supported
00:07:19.115 Deallocated Guard Field: 0xFFFF
00:07:19.115 Flush: Supported
00:07:19.115 Reservation: Not Supported
00:07:19.115 Namespace Sharing Capabilities: Private
00:07:19.115 Size (in LBAs): 1048576 (4GiB)
00:07:19.115 Capacity (in LBAs): 1048576 (4GiB)
00:07:19.115 Utilization (in LBAs): 1048576 (4GiB)
00:07:19.115 Thin Provisioning: Not Supported
00:07:19.115 Per-NS Atomic Units: No
00:07:19.115 Maximum Single Source Range Length: 128
00:07:19.115 Maximum Copy Length: 128
00:07:19.115 Maximum Source Range Count: 128
00:07:19.116 NGUID/EUI64 Never Reused: No
00:07:19.116 Namespace Write Protected: No
00:07:19.116 Number of LBA Formats: 8
00:07:19.116 Current LBA Format: LBA Format #04
00:07:19.116 LBA Format #00: Data Size: 512 Metadata Size: 0
00:07:19.116 LBA Format #01: Data Size: 512 Metadata Size: 8
00:07:19.116 LBA Format #02: Data Size: 512 Metadata Size: 16
00:07:19.116 LBA Format #03: Data Size: 512 Metadata Size: 64
00:07:19.116 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:07:19.116 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:07:19.116 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:07:19.116 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:07:19.116 
00:07:19.116 NVM Specific Namespace Data
00:07:19.116 ===========================
00:07:19.116 Logical Block Storage Tag Mask: 0
00:07:19.116 Protection Information Capabilities:
00:07:19.116 16b Guard Protection Information Storage Tag Support: No
00:07:19.116 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:07:19.116 Storage Tag Check Read Support: No
00:07:19.116 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 Namespace ID:3
00:07:19.116 Error Recovery Timeout: Unlimited
00:07:19.116 Command Set Identifier: NVM (00h)
00:07:19.116 Deallocate: Supported
00:07:19.116 Deallocated/Unwritten Error: Supported
00:07:19.116 Deallocated Read Value: All 0x00
00:07:19.116 Deallocate in Write Zeroes: Not Supported
00:07:19.116 Deallocated Guard Field: 0xFFFF
00:07:19.116 Flush: Supported
00:07:19.116 Reservation: Not Supported
00:07:19.116 Namespace Sharing Capabilities: Private
00:07:19.116 Size (in LBAs): 1048576 (4GiB)
00:07:19.116 Capacity (in LBAs): 1048576 (4GiB)
00:07:19.116 Utilization (in LBAs): 1048576 (4GiB)
00:07:19.116 Thin Provisioning: Not Supported
00:07:19.116 Per-NS Atomic Units: No
00:07:19.116 Maximum Single Source Range Length: 128
00:07:19.116 Maximum Copy Length: 128
00:07:19.116 Maximum Source Range Count: 128
00:07:19.116 NGUID/EUI64 Never Reused: No
00:07:19.116 Namespace Write Protected: No
00:07:19.116 Number of LBA Formats: 8
00:07:19.116 Current LBA Format: LBA Format #04
00:07:19.116 LBA Format #00: Data Size: 512 Metadata Size: 0
00:07:19.116 LBA Format #01: Data Size: 512 Metadata Size: 8
00:07:19.116 LBA Format #02: Data Size: 512 Metadata Size: 16
00:07:19.116 LBA Format #03: Data Size: 512 Metadata Size: 64
00:07:19.116 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:07:19.116 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:07:19.116 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:07:19.116 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:07:19.116 
00:07:19.116 NVM Specific Namespace Data
00:07:19.116 ===========================
00:07:19.116 Logical Block Storage Tag Mask: 0
00:07:19.116 Protection Information Capabilities:
00:07:19.116 16b Guard Protection Information Storage Tag Support: No
00:07:19.116 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:07:19.116 Storage Tag Check Read Support: No
00:07:19.116 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.116 09:39:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:07:19.116 09:39:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0
00:07:19.378 =====================================================
00:07:19.378 NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:19.378 =====================================================
00:07:19.378 Controller Capabilities/Features
00:07:19.378 ================================
00:07:19.378 Vendor ID: 1b36
00:07:19.379 Subsystem Vendor ID: 1af4
00:07:19.379 Serial Number: 12343
00:07:19.379 Model Number: QEMU NVMe Ctrl
00:07:19.379 Firmware Version: 8.0.0
00:07:19.379 Recommended Arb Burst: 6
00:07:19.379 IEEE OUI Identifier: 00 54 52
00:07:19.379 Multi-path I/O
00:07:19.379 May have multiple subsystem ports: No
00:07:19.379 May have multiple controllers: Yes
00:07:19.379 Associated with SR-IOV VF: No
00:07:19.379 Max Data Transfer Size: 524288
00:07:19.379 Max Number of Namespaces: 256
00:07:19.379 Max Number of I/O Queues: 64
00:07:19.379 NVMe Specification Version (VS): 1.4
00:07:19.379 NVMe Specification Version (Identify): 1.4
00:07:19.379 Maximum Queue Entries: 2048
00:07:19.379 Contiguous Queues Required: Yes
00:07:19.379 Arbitration Mechanisms Supported
00:07:19.379 Weighted Round Robin: Not Supported
00:07:19.379 Vendor Specific: Not Supported
00:07:19.379 Reset Timeout: 7500 ms
00:07:19.379 Doorbell Stride: 4 bytes
00:07:19.379 NVM Subsystem Reset: Not Supported
00:07:19.379 Command Sets Supported
00:07:19.379 NVM Command Set: Supported
00:07:19.379 Boot Partition: Not Supported
00:07:19.379 Memory Page Size Minimum: 4096 bytes
00:07:19.379 Memory Page Size Maximum: 65536 bytes
00:07:19.379 Persistent Memory Region: Not Supported
00:07:19.379 Optional Asynchronous Events Supported
00:07:19.379 Namespace Attribute Notices: Supported
00:07:19.379 Firmware Activation Notices: Not Supported
00:07:19.379 ANA Change Notices: Not Supported
00:07:19.379 PLE Aggregate Log Change Notices: Not Supported
00:07:19.379 LBA Status Info Alert Notices: Not Supported
00:07:19.379 EGE Aggregate Log Change Notices: Not Supported
00:07:19.379 Normal NVM Subsystem Shutdown event: Not Supported
00:07:19.379 Zone Descriptor Change Notices: Not Supported
00:07:19.379 Discovery Log Change Notices: Not Supported
00:07:19.379 Controller Attributes
00:07:19.379 128-bit Host Identifier: Not Supported
00:07:19.379 Non-Operational Permissive Mode: Not Supported
00:07:19.379 NVM Sets: Not Supported
00:07:19.379 Read Recovery Levels: Not Supported
00:07:19.379 Endurance Groups: Supported
00:07:19.379 Predictable Latency Mode: Not Supported
00:07:19.379 Traffic Based Keep Alive: Not Supported
00:07:19.379 Namespace Granularity: Not Supported
00:07:19.379 SQ Associations: Not Supported
00:07:19.379 UUID List: Not Supported
00:07:19.379 Multi-Domain Subsystem: Not Supported
00:07:19.379 Fixed Capacity Management: Not Supported
00:07:19.379 Variable Capacity Management: Not Supported
00:07:19.379 Delete Endurance Group: Not Supported
00:07:19.379 Delete NVM Set: Not Supported
00:07:19.379 Extended LBA Formats Supported: Supported
00:07:19.379 Flexible Data Placement Supported: Supported
00:07:19.379 
00:07:19.379 Controller Memory Buffer Support
00:07:19.379 ================================
00:07:19.379 Supported: No
00:07:19.379 
00:07:19.379 Persistent Memory Region Support
00:07:19.379 ================================
00:07:19.379 Supported: No
00:07:19.379 
00:07:19.379 Admin Command Set Attributes
00:07:19.379 ============================
00:07:19.379 Security Send/Receive: Not Supported
00:07:19.379 Format NVM: Supported
00:07:19.379 Firmware Activate/Download: Not Supported
00:07:19.379 Namespace Management: Supported
00:07:19.379 Device Self-Test: Not Supported
00:07:19.379 Directives: Supported
00:07:19.379 NVMe-MI: Not Supported
00:07:19.379 Virtualization Management: Not Supported
00:07:19.379 Doorbell Buffer Config: Supported
00:07:19.379 Get LBA Status Capability: Not Supported
00:07:19.379 Command & Feature Lockdown Capability: Not Supported
00:07:19.379 Abort Command Limit: 4
00:07:19.379 Async Event Request Limit: 4
00:07:19.379 Number of Firmware Slots: N/A
00:07:19.379 Firmware Slot 1 Read-Only: N/A
00:07:19.379 Firmware Activation Without Reset: N/A
00:07:19.379 Multiple Update Detection Support: N/A
00:07:19.379 Firmware Update Granularity: No Information Provided
00:07:19.379 Per-Namespace SMART Log: Yes
00:07:19.379 Asymmetric Namespace Access Log Page: Not Supported
00:07:19.379 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3
00:07:19.379 Command Effects Log Page: Supported
00:07:19.379 Get Log Page Extended Data: Supported
00:07:19.379 Telemetry Log Pages: Not Supported
00:07:19.379 Persistent Event Log Pages: Not Supported
00:07:19.379 Supported Log Pages Log Page: May Support
00:07:19.379 Commands Supported & Effects Log Page: Not Supported
00:07:19.379 Feature Identifiers & Effects Log Page: May Support
00:07:19.379 NVMe-MI Commands & Effects Log Page: May Support
00:07:19.379 Data Area 4 for Telemetry Log: Not Supported
00:07:19.379 Error Log Page Entries Supported: 1
00:07:19.379 Keep Alive: Not Supported
00:07:19.379 
00:07:19.379 NVM Command Set Attributes
00:07:19.379 ==========================
00:07:19.379 Submission Queue Entry Size
00:07:19.379 Max: 64
00:07:19.379 Min: 64
00:07:19.379 Completion Queue Entry Size
00:07:19.379 Max: 16
00:07:19.379 Min: 16
00:07:19.379 Number of Namespaces: 256
00:07:19.379 Compare Command: Supported
00:07:19.379 Write Uncorrectable Command: Not Supported
00:07:19.379 Dataset Management Command: Supported
00:07:19.379 Write Zeroes Command: Supported
00:07:19.379 Set Features Save Field: Supported
00:07:19.379 Reservations: Not Supported
00:07:19.379 Timestamp: Supported
00:07:19.379 Copy: Supported
00:07:19.379 Volatile Write Cache: Present
00:07:19.379 Atomic Write Unit (Normal): 1
00:07:19.379 Atomic Write Unit (PFail): 1
00:07:19.379 Atomic Compare & Write Unit: 1
00:07:19.379 Fused Compare & Write: Not Supported
00:07:19.379 Scatter-Gather List
00:07:19.379 SGL Command Set: Supported
00:07:19.379 SGL Keyed: Not Supported
00:07:19.379 SGL Bit Bucket Descriptor: Not Supported
00:07:19.379 SGL Metadata Pointer: Not Supported
00:07:19.379 Oversized SGL: Not Supported
00:07:19.379 SGL Metadata Address: Not Supported
00:07:19.379 SGL Offset: Not Supported
00:07:19.379 Transport SGL Data Block: Not Supported
00:07:19.379 Replay Protected Memory Block: Not Supported
00:07:19.379 
00:07:19.379 Firmware Slot Information
00:07:19.379 =========================
00:07:19.379 Active slot: 1
00:07:19.379 Slot 1 Firmware Revision: 1.0
00:07:19.379 
00:07:19.379 
00:07:19.379 Commands Supported and Effects
00:07:19.379 ==============================
00:07:19.379 Admin Commands
00:07:19.379 --------------
00:07:19.379 Delete I/O Submission Queue (00h): Supported
00:07:19.379 Create I/O Submission Queue (01h): Supported
00:07:19.379 Get Log Page (02h): Supported
00:07:19.379 Delete I/O Completion Queue (04h): Supported
00:07:19.379 Create I/O Completion Queue (05h): Supported
00:07:19.379 Identify (06h): Supported
00:07:19.379 Abort (08h): Supported
00:07:19.380 Set Features (09h): Supported
00:07:19.380 Get Features (0Ah): Supported
00:07:19.380 Asynchronous Event Request (0Ch): Supported
00:07:19.380 Namespace Attachment (15h): Supported NS-Inventory-Change
00:07:19.380 Directive Send (19h): Supported
00:07:19.380 Directive Receive (1Ah): Supported
00:07:19.380 Virtualization Management (1Ch): Supported
00:07:19.380 Doorbell Buffer Config (7Ch): Supported
00:07:19.380 Format NVM (80h): Supported LBA-Change
00:07:19.380 I/O Commands
00:07:19.380 ------------
00:07:19.380 Flush (00h): Supported LBA-Change
00:07:19.380 Write (01h): Supported LBA-Change
00:07:19.380 Read (02h): Supported
00:07:19.380 Compare (05h): Supported
00:07:19.380 Write Zeroes (08h): Supported LBA-Change
00:07:19.380 Dataset Management (09h): Supported LBA-Change
00:07:19.380 Unknown (0Ch): Supported
00:07:19.380 Unknown (12h): Supported
00:07:19.380 Copy (19h): Supported LBA-Change
00:07:19.380 Unknown (1Dh): Supported LBA-Change
00:07:19.380 
00:07:19.380 Error Log
00:07:19.380 =========
00:07:19.380 
00:07:19.380 Arbitration
00:07:19.380 ===========
00:07:19.380 Arbitration Burst: no limit
00:07:19.380 
00:07:19.380 Power Management
00:07:19.380 ================
00:07:19.380 Number of Power States: 1
00:07:19.380 Current Power State: Power State #0
00:07:19.380 Power State #0:
00:07:19.380 Max Power: 25.00 W
00:07:19.380 Non-Operational State: Operational
00:07:19.380 Entry Latency: 16 microseconds
00:07:19.380 Exit Latency: 4 microseconds
00:07:19.380 Relative Read Throughput: 0
00:07:19.380 Relative Read Latency: 0
00:07:19.380 Relative Write Throughput: 0
00:07:19.380 Relative Write Latency: 0
00:07:19.380 Idle Power: Not Reported
00:07:19.380 Active Power: Not Reported
00:07:19.380 Non-Operational Permissive Mode: Not Supported
00:07:19.380 
00:07:19.380 Health Information
00:07:19.380 ==================
00:07:19.380 Critical Warnings:
00:07:19.380 Available Spare Space: OK
00:07:19.380 Temperature: OK
00:07:19.380 Device Reliability: OK
00:07:19.380 Read Only: No
00:07:19.380 Volatile Memory Backup: OK
00:07:19.380 Current Temperature: 323 Kelvin (50 Celsius)
00:07:19.380 Temperature Threshold: 343 Kelvin (70 Celsius)
00:07:19.380 Available Spare: 0%
00:07:19.380 Available Spare Threshold: 0%
00:07:19.380 Life Percentage Used: 0%
00:07:19.380 Data Units Read: 884
00:07:19.380 Data Units Written: 813
00:07:19.380 Host Read Commands: 42621
00:07:19.380 Host Write Commands: 42045
00:07:19.380 Controller Busy Time: 0 minutes
00:07:19.380 Power Cycles: 0
00:07:19.380 Power On Hours: 0 hours
00:07:19.380 Unsafe Shutdowns: 0
00:07:19.380 Unrecoverable Media Errors: 0
00:07:19.380 Lifetime Error Log Entries: 0
00:07:19.380 Warning Temperature Time: 0 minutes
00:07:19.380 Critical Temperature Time: 0 minutes
00:07:19.380 
00:07:19.380 Number of Queues
00:07:19.380 ================
00:07:19.380 Number of I/O Submission Queues: 64
00:07:19.380 Number of I/O Completion Queues: 64
00:07:19.380 
00:07:19.380 ZNS Specific Controller Data
00:07:19.380 ============================
00:07:19.380 Zone Append Size Limit: 0
00:07:19.380 
00:07:19.380 
00:07:19.380 Active Namespaces
00:07:19.380 =================
00:07:19.380 Namespace ID:1
00:07:19.380 Error Recovery Timeout: Unlimited
00:07:19.380 Command Set Identifier: NVM (00h)
00:07:19.380 Deallocate: Supported
00:07:19.380 Deallocated/Unwritten Error: Supported
00:07:19.380 Deallocated Read Value: All 0x00
00:07:19.380 Deallocate in Write Zeroes: Not Supported
00:07:19.380 Deallocated Guard Field: 0xFFFF
00:07:19.380 Flush: Supported
00:07:19.380 Reservation: Not Supported
00:07:19.380 Namespace Sharing Capabilities: Multiple Controllers
00:07:19.380 Size (in LBAs): 262144 (1GiB)
00:07:19.380 Capacity (in LBAs): 262144 (1GiB)
00:07:19.380 Utilization (in LBAs): 262144 (1GiB)
00:07:19.380 Thin Provisioning: Not Supported
00:07:19.380 Per-NS Atomic Units: No
00:07:19.380 Maximum Single Source Range Length: 128
00:07:19.380 Maximum Copy Length: 128
00:07:19.380 Maximum Source Range Count: 128
00:07:19.380 NGUID/EUI64 Never Reused: No
00:07:19.380 Namespace Write Protected: No
00:07:19.380 Endurance group ID: 1
00:07:19.380 Number of LBA Formats: 8
00:07:19.380 Current LBA Format: LBA Format #04
00:07:19.380 LBA Format #00: Data Size: 512 Metadata Size: 0
00:07:19.380 LBA Format #01: Data Size: 512 Metadata Size: 8
00:07:19.380 LBA Format #02: Data Size: 512 Metadata Size: 16
00:07:19.380 LBA Format #03: Data Size: 512 Metadata Size: 64
00:07:19.380 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:07:19.380 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:07:19.380 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:07:19.380 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:07:19.380 
00:07:19.380 Get Feature FDP:
00:07:19.380 ================
00:07:19.380 Enabled: Yes
00:07:19.380 FDP configuration index: 0
00:07:19.380 
00:07:19.380 FDP configurations log page
00:07:19.380 ===========================
00:07:19.380 Number of FDP configurations: 1
00:07:19.380 Version: 0
00:07:19.380 Size: 112
00:07:19.380 FDP Configuration Descriptor: 0
00:07:19.380 Descriptor Size: 96
00:07:19.380 Reclaim Group Identifier format: 2
00:07:19.380 FDP Volatile Write Cache: Not Present
00:07:19.380 FDP Configuration: Valid
00:07:19.380 Vendor Specific Size: 0
00:07:19.380 Number of Reclaim Groups: 2
00:07:19.380 Number of Reclaim Unit Handles: 8
00:07:19.380 Max Placement Identifiers: 128
00:07:19.380 Number of Namespaces Supported: 256
00:07:19.380 Reclaim unit Nominal Size: 6000000 bytes
00:07:19.380 Estimated Reclaim Unit Time Limit: Not Reported
00:07:19.380 RUH Desc #000: RUH Type: Initially Isolated
00:07:19.380 RUH Desc #001: RUH Type: Initially Isolated
00:07:19.380 RUH Desc #002: RUH Type: Initially Isolated
00:07:19.380 RUH Desc #003: RUH Type: Initially Isolated
00:07:19.380 RUH Desc #004: RUH Type: Initially Isolated
00:07:19.380 RUH Desc #005: RUH Type: Initially Isolated
00:07:19.380 RUH Desc #006: RUH Type: Initially Isolated
00:07:19.380 RUH Desc #007: RUH Type: Initially Isolated
00:07:19.380 
00:07:19.380 FDP reclaim unit handle usage log page
00:07:19.380 ======================================
00:07:20.381 Number of Reclaim Unit Handles: 8
00:07:19.381 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:07:19.381 RUH Usage Desc #001: RUH Attributes: Unused
00:07:19.381 RUH Usage Desc #002: RUH Attributes: Unused
00:07:19.381 RUH Usage Desc #003: RUH Attributes: Unused
00:07:19.381 RUH Usage Desc #004: RUH Attributes: Unused
00:07:19.381 RUH Usage Desc #005: RUH Attributes: Unused
00:07:19.381 RUH Usage Desc #006: RUH Attributes: Unused
00:07:19.381 RUH Usage Desc #007: RUH Attributes: Unused
00:07:19.381 
00:07:19.381 FDP statistics log page
00:07:19.381 =======================
00:07:19.381 Host bytes with metadata written: 508928000
00:07:19.381 Media bytes with metadata written: 508985344
00:07:19.381 Media bytes erased: 0
00:07:19.381 
00:07:19.381 FDP events log page
00:07:19.381 ===================
00:07:19.381 Number of FDP events: 0
00:07:19.381 
00:07:19.381 NVM Specific Namespace Data
00:07:19.381 ===========================
00:07:19.381 Logical Block Storage Tag Mask: 0
00:07:19.381 Protection Information Capabilities:
00:07:19.381 16b Guard Protection Information Storage Tag Support: No
00:07:19.381 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:07:19.381 Storage Tag Check Read Support: No
00:07:19.381 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.381 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.381 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.381 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.381 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.381 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.381 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.381 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:19.381 
00:07:19.381 real 0m1.189s
00:07:19.381 user 0m0.454s
00:07:19.381 sys 0m0.518s
00:07:19.381 09:39:06 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:19.381 09:39:06 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x
00:07:19.381 ************************************
00:07:19.381 END TEST nvme_identify
00:07:19.381 ************************************
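For rerunning these two stages by hand, a minimal sketch of the SPDK example binaries exercised in this job; the build path and the PCIe BDF 0000:00:13.0 are taken from the log itself, so adjust them to the local setup (all other flags are copied verbatim from the harness invocations above and below):

    # Dump controller, namespace, and FDP identify data (as in TEST nvme_identify above)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

    # Read workload used by TEST nvme_perf below: queue depth 128 (-q), 12288-byte
    # I/Os (-o), read pattern (-w), 1-second run (-t); -LL turns on the latency
    # tracking that produces the percentile summaries and detailed histograms in
    # the results; -i and -N are passed through from the harness unchanged.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N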
00:07:19.381 09:39:06 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf
00:07:19.381 09:39:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:19.381 09:39:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:19.381 09:39:06 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:19.381 ************************************
00:07:19.381 START TEST nvme_perf
00:07:19.381 ************************************
00:07:19.381 09:39:06 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf
00:07:19.381 09:39:06 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
00:07:20.772 Initializing NVMe Controllers
00:07:20.772 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:20.772 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:20.772 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:20.772 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:20.772 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:20.772 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:20.772 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:20.772 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:20.772 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:20.772 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:20.772 Initialization complete. Launching workers.
00:07:20.772 ========================================================
00:07:20.772 Latency(us)
00:07:20.772 Device Information : IOPS MiB/s Average min max
00:07:20.772 PCIE (0000:00:10.0) NSID 1 from core 0: 15470.04 181.29 8285.30 5702.71 33530.63
00:07:20.772 PCIE (0000:00:11.0) NSID 1 from core 0: 15470.04 181.29 8274.18 5747.58 31766.92
00:07:20.772 PCIE (0000:00:13.0) NSID 1 from core 0: 15470.04 181.29 8261.77 5701.77 30444.44
00:07:20.772 PCIE (0000:00:12.0) NSID 1 from core 0: 15470.04 181.29 8249.10 5778.54 28821.42
00:07:20.772 PCIE (0000:00:12.0) NSID 2 from core 0: 15470.04 181.29 8236.22 5789.51 27123.46
00:07:20.772 PCIE (0000:00:12.0) NSID 3 from core 0: 15533.97 182.04 8189.90 5772.24 21675.95
00:07:20.772 ========================================================
00:07:20.772 Total : 92884.16 1088.49 8249.37 5701.77 33530.63
00:07:20.772 
00:07:20.772 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:20.772 =================================================================================
00:07:20.772 1.00000% : 5973.858us
00:07:20.772 10.00000% : 6503.188us
00:07:20.772 25.00000% : 6956.898us
00:07:20.772 50.00000% : 7561.846us
00:07:20.772 75.00000% : 8973.391us
00:07:20.772 90.00000% : 10586.585us
00:07:20.772 95.00000% : 12149.366us
00:07:20.772 98.00000% : 13712.148us
00:07:20.772 99.00000% : 14821.218us
00:07:20.772 99.50000% : 28634.191us
00:07:20.772 99.90000% : 33272.123us
00:07:20.772 99.99000% : 33675.422us
00:07:20.772 99.99900% : 33675.422us
00:07:20.772 99.99990% : 33675.422us
00:07:20.772 99.99999% : 33675.422us
00:07:20.772 
00:07:20.772 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:20.772 =================================================================================
00:07:20.772 1.00000% : 6024.271us
00:07:20.772 10.00000% : 6553.600us
00:07:20.772 25.00000% : 6956.898us
00:07:20.772 50.00000% : 7561.846us
00:07:20.772 75.00000% : 9023.803us
00:07:20.772 90.00000% : 10687.409us
00:07:20.772 95.00000% : 12300.603us
00:07:20.772 98.00000% : 13712.148us
00:07:20.772 99.00000% : 15022.868us
00:07:20.772 99.50000% : 26819.348us
00:07:20.772 99.90000% : 31457.280us
00:07:20.772 99.99000% : 31860.578us
00:07:20.772 99.99900% : 31860.578us
00:07:20.772 99.99990% : 31860.578us
00:07:20.772 99.99999% : 31860.578us
00:07:20.772 
00:07:20.772 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:20.772 =================================================================================
00:07:20.772 1.00000% : 5999.065us
00:07:20.772 10.00000% : 6553.600us
00:07:20.772 25.00000% : 6956.898us
00:07:20.772 50.00000% : 7561.846us
00:07:20.772 75.00000% : 9023.803us
00:07:20.772 90.00000% : 10737.822us
00:07:20.772 95.00000% : 11998.129us
00:07:20.772 98.00000% : 13409.674us
00:07:20.772 99.00000% : 14922.043us
00:07:20.772 99.50000% : 25508.628us
00:07:20.772 99.90000% : 30247.385us
00:07:20.772 99.99000% : 30449.034us
00:07:20.772 99.99900% : 30449.034us
00:07:20.772 99.99990% : 30449.034us
00:07:20.772 99.99999% : 30449.034us
00:07:20.772 
00:07:20.772 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:20.772 =================================================================================
00:07:20.772 1.00000% : 5999.065us
00:07:20.772 10.00000% : 6553.600us
00:07:20.772 25.00000% : 6956.898us
00:07:20.772 50.00000% : 7561.846us
00:07:20.772 75.00000% : 8973.391us
00:07:20.772 90.00000% : 10737.822us
00:07:20.772 95.00000% : 11998.129us
00:07:20.772 98.00000% : 13510.498us
00:07:20.772 99.00000% : 14821.218us
00:07:20.772 99.50000% : 23693.785us
00:07:20.772 99.90000% : 28634.191us
00:07:20.772 99.99000% : 28835.840us
00:07:20.772 99.99900% : 28835.840us
00:07:20.772 99.99990% : 28835.840us
00:07:20.772 99.99999% : 28835.840us
00:07:20.772 
00:07:20.772 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:20.772 =================================================================================
00:07:20.772 1.00000% : 6024.271us
00:07:20.772 10.00000% : 6553.600us
00:07:20.772 25.00000% : 6956.898us
00:07:20.772 50.00000% : 7612.258us
00:07:20.772 75.00000% : 8973.391us
00:07:20.772 90.00000% : 10687.409us
00:07:20.772 95.00000% : 11998.129us
00:07:20.772 98.00000% : 13611.323us
00:07:20.772 99.00000% : 14720.394us
00:07:20.772 99.50000% : 21979.766us
00:07:20.772 99.90000% : 26819.348us
00:07:20.772 99.99000% : 27222.646us
00:07:20.772 99.99900% : 27222.646us
00:07:20.772 99.99990% : 27222.646us
00:07:20.772 99.99999% : 27222.646us
00:07:20.772 
00:07:20.772 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:20.772 =================================================================================
00:07:20.772 1.00000% : 6024.271us
00:07:20.772 10.00000% : 6553.600us
00:07:20.772 25.00000% : 6956.898us
00:07:20.772 50.00000% : 7612.258us
00:07:20.772 75.00000% : 8973.391us
00:07:20.772 90.00000% : 10485.760us
00:07:20.772 95.00000% : 11998.129us
00:07:20.772 98.00000% : 13812.972us
00:07:20.772 99.00000% : 14821.218us
00:07:20.772 99.50000% : 16736.886us
00:07:20.772 99.90000% : 21374.818us
00:07:20.772 99.99000% : 21677.292us
00:07:20.772 99.99900% : 21677.292us
00:07:20.772 99.99990% : 21677.292us
00:07:20.772 99.99999% : 21677.292us
00:07:20.772 
00:07:20.772 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:20.772 ==============================================================================
00:07:20.772 Range in us Cumulative IO count
00:07:20.772 5696.591 - 5721.797: 0.0323% ( 5)
00:07:20.772 5721.797 - 5747.003: 0.1033% ( 11)
00:07:20.772 5747.003 - 5772.209: 0.2066% ( 16)
00:07:20.772 5772.209 - 5797.415: 0.2389% ( 5)
00:07:20.772 5797.415 - 5822.622: 0.3164% ( 12)
00:07:20.772 5822.622 - 5847.828: 0.4132% ( 15)
00:07:20.772 5847.828 - 5873.034: 0.5036% ( 14)
00:07:20.772 5873.034 - 5898.240: 0.6134% ( 17)
00:07:20.772 5898.240 - 5923.446: 0.7683% ( 24)
00:07:20.772 5923.446 - 5948.652: 0.9233% ( 24)
00:07:20.772 5948.652 - 5973.858: 1.0847% ( 25)
00:07:20.772 5973.858 - 5999.065: 1.3107% ( 35)
00:07:20.772 5999.065 - 6024.271: 1.5044% ( 30)
00:07:20.772 6024.271 - 6049.477: 1.7691% ( 41)
00:07:20.772 6049.477 - 6074.683: 2.0080% ( 37)
00:07:20.772 6074.683 - 6099.889: 2.2534% ( 38)
00:07:20.772 6099.889 - 6125.095: 2.5116% ( 40)
00:07:20.772 6125.095 - 6150.302: 2.7957% ( 44)
00:07:20.772 6150.302 - 6175.508: 3.1896% ( 61)
00:07:20.772 6175.508 - 6200.714: 3.5963% ( 63)
00:07:20.772 6200.714 - 6225.920: 3.9773% ( 59)
00:07:20.772 6225.920 - 6251.126: 4.4292% ( 70)
00:07:20.772 6251.126 - 6276.332: 4.8618% ( 67)
00:07:20.772 6276.332 - 6301.538: 5.4365% ( 89)
00:07:20.772 6301.538 - 6326.745: 5.8949% ( 71)
00:07:20.772 6326.745 - 6351.951: 6.4308% ( 83)
00:07:20.772 6351.951 - 6377.157: 7.0119% ( 90)
00:07:20.772 6377.157 - 6402.363: 7.5930% ( 90)
00:07:20.773 6402.363 - 6427.569: 8.2064% ( 95)
00:07:20.773 6427.569 - 6452.775: 8.7939% ( 91)
00:07:20.773 6452.775 - 6503.188: 10.1756% ( 214)
00:07:20.773 6503.188 - 6553.600: 11.6477% ( 228)
00:07:20.773 6553.600 - 6604.012: 13.1844% ( 238)
00:07:20.773 6604.012 - 6654.425: 14.8308% ( 255)
00:07:20.773 6654.425 - 6704.837: 16.6258% ( 278)
00:07:20.773 6704.837 - 6755.249: 18.4013% ( 275)
00:07:20.773 6755.249 - 6805.662: 20.2479% ( 286)
00:07:20.773 6805.662 - 6856.074: 22.3205% ( 321)
00:07:20.773 6856.074 - 6906.486: 24.3479% ( 314)
00:07:20.773 6906.486 - 6956.898: 26.3365% ( 308)
00:07:20.773 6956.898 - 7007.311: 28.4091% ( 321)
00:07:20.773 7007.311 - 7057.723: 30.5914% ( 338)
00:07:20.773 7057.723 - 7108.135: 32.7092% ( 328)
00:07:20.773 7108.135 - 7158.548: 34.8140% ( 326)
00:07:20.773 7158.548 - 7208.960: 36.9254% ( 327)
00:07:20.773 7208.960 - 7259.372: 38.9398% ( 312)
00:07:20.773 7259.372 - 7309.785: 40.8962% ( 303)
00:07:20.773 7309.785 - 7360.197: 42.8590% ( 304)
00:07:20.773 7360.197 - 7410.609: 44.7637% ( 295)
00:07:20.773 7410.609 - 7461.022: 46.6038% ( 285)
00:07:20.773 7461.022 - 7511.434: 48.3407% ( 269)
00:07:20.773 7511.434 - 7561.846: 50.0839% ( 270)
00:07:20.773 7561.846 - 7612.258: 51.6852% ( 248)
00:07:20.773 7612.258 - 7662.671: 53.3574% ( 259)
00:07:20.773 7662.671 - 7713.083: 54.8747% ( 235)
00:07:20.773 7713.083 - 7763.495: 56.2887% ( 219)
00:07:20.773 7763.495 - 7813.908: 57.6059% ( 204)
00:07:20.773 7813.908 - 7864.320: 58.8004% ( 185)
00:07:20.773 7864.320 - 7914.732: 60.0207% ( 189)
00:07:20.773 7914.732 - 7965.145: 61.0473% ( 159)
00:07:20.773 7965.145 - 8015.557: 62.1384% ( 169)
00:07:20.773 8015.557 - 8065.969: 63.0617% ( 143)
00:07:20.773 8065.969 - 8116.382: 63.8946% ( 129)
00:07:20.773 8116.382 - 8166.794: 64.7404% ( 131)
00:07:20.773 8166.794 - 8217.206: 65.5088% ( 119)
00:07:20.773 8217.206 - 8267.618: 66.2900% ( 121)
00:07:20.773 8267.618 - 8318.031: 67.0261% ( 114)
00:07:20.773 8318.031 - 8368.443: 67.6976% ( 104)
00:07:20.773 8368.443 - 8418.855: 68.4013% ( 109)
00:07:20.773 8418.855 - 8469.268: 69.0276% ( 97)
00:07:20.773 8469.268 - 8519.680: 69.7185% ( 107)
00:07:20.773 8519.680 - 8570.092: 70.3190% ( 93)
00:07:20.773 8570.092 - 8620.505: 71.0034% ( 106)
00:07:20.773 8620.505 - 8670.917: 71.5845% ( 90)
00:07:20.773 8670.917 - 8721.329: 72.1591% ( 89)
00:07:20.773 8721.329 - 8771.742: 72.8306% ( 104)
00:07:20.773 8771.742 - 8822.154: 73.4440% ( 95)
00:07:20.773 8822.154 - 8872.566: 74.0509% ( 94)
00:07:20.773 8872.566 - 8922.978: 74.5545% ( 78)
00:07:20.773 8922.978 - 8973.391: 75.1808% ( 97)
00:07:20.773 8973.391 - 9023.803: 75.7296% ( 85)
00:07:20.773 9023.803 - 9074.215: 76.2784% ( 85)
00:07:20.773 9074.215 - 9124.628: 76.8208% ( 84)
00:07:20.773 9124.628 - 9175.040: 77.3373% ( 80)
00:07:20.773 9175.040 - 9225.452: 77.8861% ( 85)
00:07:20.773 9225.452 - 9275.865: 78.4349% ( 85)
00:07:20.773 9275.865 - 9326.277: 79.0160% ( 90)
00:07:20.773 9326.277 - 9376.689: 79.5777% ( 87)
00:07:20.773 9376.689 - 9427.102: 80.1201% ( 84)
00:07:20.773 9427.102 - 9477.514: 80.6431% ( 81)
00:07:20.773 9477.514 - 9527.926: 81.0821% ( 68)
00:07:20.773 9527.926 - 9578.338: 81.6051% ( 81)
00:07:20.773 9578.338 - 9628.751: 82.0442% ( 68)
00:07:20.773 9628.751 - 9679.163: 82.5607% ( 80)
00:07:20.773 9679.163 - 9729.575: 82.9933% ( 67)
00:07:20.773 9729.575 - 9779.988: 83.4840% ( 76)
00:07:20.773 9779.988 - 9830.400: 83.9553% ( 73)
00:07:20.773 9830.400 - 9880.812: 84.4331% ( 74)
00:07:20.773 9880.812 - 9931.225: 84.8980% ( 72)
00:07:20.773 9931.225 - 9981.637: 85.2854% ( 60)
00:07:20.773 9981.637 - 10032.049: 85.7438% ( 71)
00:07:20.773 10032.049 - 10082.462: 86.1570% ( 64)
00:07:20.773 10082.462 - 10132.874: 86.6284% ( 73)
00:07:20.773 10132.874 - 10183.286: 87.1320% ( 78)
00:07:20.773 10183.286 - 10233.698: 87.5646% ( 67)
00:07:20.773 10233.698 - 10284.111: 88.0165% ( 70)
00:07:20.773 10284.111 - 10334.523: 88.3910% ( 58)
00:07:20.773 10334.523 - 10384.935: 88.7138% ( 50)
00:07:20.773 10384.935 - 10435.348: 89.1077% ( 61)
00:07:20.773 10435.348 - 10485.760: 89.4370% ( 51)
00:07:20.773 10485.760 - 10536.172: 89.6952% ( 40)
00:07:20.773 10536.172 - 10586.585: 90.0568% ( 56)
00:07:20.773 10586.585 - 10636.997: 90.3538% ( 46)
00:07:20.773 10636.997 - 10687.409: 90.6702% ( 49)
00:07:20.773 10687.409 - 10737.822: 90.9866% ( 49)
00:07:20.773 10737.822 - 10788.234: 91.2448% ( 40)
00:07:20.773 10788.234 - 10838.646: 91.5935% ( 54)
00:07:20.773 10838.646 - 10889.058: 91.8518% ( 40)
00:07:20.773 10889.058 - 10939.471: 92.1036% ( 39)
00:07:20.773 10939.471 - 10989.883: 92.3489% ( 38)
00:07:20.773 10989.883 - 11040.295: 92.5878% ( 37)
00:07:20.773 11040.295 - 11090.708: 92.8590% ( 42)
00:07:20.773 11090.708 - 11141.120: 93.0850% ( 35)
00:07:20.773 11141.120 - 11191.532: 93.2593% ( 27)
00:07:20.773 11191.532 - 11241.945: 93.4143% ( 24)
00:07:20.773 11241.945 - 11292.357: 93.5757% ( 25)
00:07:20.773 11292.357 - 11342.769: 93.6983% ( 19)
00:07:20.773 11342.769 - 11393.182: 93.8339% ( 21)
00:07:20.773 11393.182 - 11443.594: 93.9243% ( 14)
00:07:20.773 11443.594 - 11494.006: 94.0212% ( 15)
00:07:20.773 11494.006 - 11544.418: 94.1245% ( 16)
00:07:20.773 11544.418 - 11594.831: 94.1955% ( 11)
00:07:20.773 11594.831 - 11645.243: 94.2859% ( 14)
00:07:20.773 11645.243 - 11695.655: 94.3634% ( 12)
00:07:20.773 11695.655 - 11746.068: 94.4086% ( 7)
00:07:20.773 11746.068 - 11796.480: 94.5119% ( 16)
00:07:20.773 11796.480 - 11846.892: 94.5829% ( 11)
00:07:20.773 11846.892 - 11897.305: 94.6346% ( 8)
00:07:20.773 11897.305 - 11947.717: 94.7056% ( 11)
00:07:20.773 11947.717 - 11998.129: 94.7766% ( 11)
00:07:20.773 11998.129 - 12048.542: 94.8412% ( 10)
00:07:20.773 12048.542 - 12098.954: 94.9316% ( 14)
00:07:20.773 12098.954 - 12149.366: 95.0284% ( 15)
00:07:20.773 12149.366 - 12199.778: 95.1253% ( 15)
00:07:20.773 12199.778 - 12250.191: 95.2350% ( 17)
00:07:20.773 12250.191 - 12300.603: 95.3448% ( 17)
00:07:20.773 12300.603 - 12351.015: 95.4352% ( 14)
00:07:20.773 12351.015 - 12401.428: 95.5514% ( 18)
00:07:20.773 12401.428 - 12451.840: 95.6612% ( 17)
00:07:20.773 12451.840 - 12502.252: 95.7903% ( 20)
00:07:20.773 12502.252 - 12552.665: 95.8807% ( 14)
00:07:20.773 12552.665 - 12603.077: 95.9969% ( 18)
00:07:20.773 12603.077 - 12653.489: 96.1196% ( 19)
00:07:20.773 12653.489 - 12703.902: 96.2293% ( 17)
00:07:20.773 12703.902 - 12754.314: 96.3391% ( 17)
00:07:20.773 12754.314 - 12804.726: 96.4553% ( 18)
00:07:20.773 12804.726 - 12855.138: 96.5586% ( 16)
00:07:20.773 12855.138 - 12905.551: 96.6555% ( 15)
00:07:20.773 12905.551 - 13006.375: 96.8621% ( 32)
00:07:20.773 13006.375 - 13107.200: 97.0881% ( 35)
00:07:20.773 13107.200 - 13208.025: 97.2882% ( 31)
00:07:20.773 13208.025 - 13308.849: 97.4432% ( 24)
00:07:20.773 13308.849 - 13409.674: 97.6240% ( 28)
00:07:20.773 13409.674 - 13510.498: 97.7531% ( 20)
00:07:20.773 13510.498 - 13611.323: 97.8758% ( 19)
00:07:20.773 13611.323 - 13712.148: 98.0049% ( 20)
00:07:20.773 13712.148 - 13812.972: 98.1470% ( 22)
00:07:20.773 13812.972 - 13913.797: 98.2890% ( 22)
00:07:20.773 13913.797 - 14014.622: 98.3988% ( 17)
00:07:20.773 14014.622 - 14115.446: 98.4569% ( 9)
00:07:20.773 14115.446 - 14216.271: 98.5860% ( 20)
00:07:20.773 14216.271 - 14317.095: 98.6893% ( 16)
00:07:20.773 14317.095 - 14417.920: 98.7668% ( 12)
00:07:20.773 14417.920 - 14518.745: 98.8443% ( 12)
00:07:20.773 14518.745 - 14619.569: 98.9153% ( 11)
00:07:20.773 14619.569 - 14720.394: 98.9799% ( 10)
00:07:20.773 14720.394 - 14821.218: 99.0121% ( 5)
00:07:20.773 14821.218 - 14922.043: 99.0509% ( 6)
00:07:20.773 14922.043 - 15022.868: 99.0767% ( 4)
00:07:20.773 15022.868 - 15123.692: 99.1090% ( 5)
00:07:20.773 15123.692 - 15224.517: 99.1413% ( 5)
00:07:20.773 15224.517 - 15325.342: 99.1736% ( 5)
00:07:20.773 27020.997 - 27222.646: 99.1865% ( 2)
00:07:20.773 27222.646 - 27424.295: 99.2317% ( 7)
00:07:20.773 27424.295 - 27625.945: 99.2769% ( 7)
00:07:20.773 27625.945 - 27827.594: 99.3285% ( 8)
00:07:20.773 27827.594 - 28029.243: 99.3866% ( 9)
00:07:20.773 28029.243 - 28230.892: 99.4318% ( 7)
00:07:20.773 28230.892 - 28432.542: 99.4899% ( 9)
00:07:20.773 28432.542 - 28634.191: 99.5416% ( 8)
00:07:20.773 28634.191 - 28835.840: 99.5868% ( 7)
00:07:20.773 31860.578 - 32062.228: 99.6191% ( 5)
00:07:20.773 32062.228 - 32263.877: 99.6707% ( 8)
00:07:20.773 32263.877 - 32465.526: 99.7224% ( 8)
00:07:20.773 32465.526 - 32667.175: 99.7740% ( 8)
00:07:20.773 32667.175 - 32868.825: 99.8257% ( 8)
00:07:20.773 32868.825 - 33070.474: 99.8838% ( 9)
00:07:20.773 33070.474 - 33272.123: 99.9354% ( 8)
00:07:20.773 33272.123 - 33473.772: 99.9871% ( 8)
00:07:20.773 33473.772 - 33675.422: 100.0000% ( 2)
00:07:20.773 
00:07:20.774 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:20.774 ==============================================================================
00:07:20.774 Range in us Cumulative IO count
00:07:20.774 5747.003 - 5772.209: 0.0129% ( 2)
00:07:20.774 5772.209 - 5797.415: 0.0258% ( 2)
00:07:20.774 5797.415 - 5822.622: 0.0904% ( 10)
00:07:20.774 5822.622 - 5847.828: 0.1614% ( 11)
00:07:20.774 5847.828 - 5873.034: 0.2260% ( 10)
00:07:20.774 5873.034 - 5898.240: 0.3164% ( 14)
00:07:20.774 5898.240 - 5923.446: 0.4132% ( 15)
00:07:20.774 5923.446 - 5948.652: 0.5488% ( 21)
00:07:20.774 5948.652 - 5973.858: 0.6844% ( 21)
00:07:20.774 5973.858 - 5999.065: 0.8394% ( 24)
00:07:20.774 5999.065 - 6024.271: 1.0072% ( 26)
00:07:20.774 6024.271 - 6049.477: 1.1945% ( 29)
00:07:20.774 6049.477 - 6074.683: 1.4657% ( 42)
00:07:20.774 6074.683 - 6099.889: 1.7885% ( 50)
00:07:20.774 6099.889 - 6125.095: 2.1049% ( 49)
00:07:20.774 6125.095 - 6150.302: 2.3438% ( 37)
00:07:20.774 6150.302 - 6175.508: 2.6666% ( 50)
00:07:20.774 6175.508 - 6200.714: 2.9571% ( 45)
00:07:20.774 6200.714 - 6225.920: 3.2864% ( 51)
00:07:20.774 6225.920 - 6251.126: 3.6932% ( 63)
00:07:20.774 6251.126 - 6276.332: 4.1968% ( 78)
00:07:20.774 6276.332 - 6301.538: 4.6746% ( 74)
00:07:20.774 6301.538 - 6326.745: 5.2557% ( 90)
00:07:20.774 6326.745 - 6351.951: 5.8432% ( 91)
00:07:20.774 6351.951 - 6377.157: 6.4114% ( 88)
00:07:20.774 6377.157 - 6402.363: 7.0312% ( 96)
00:07:20.774 6402.363 - 6427.569: 7.6317% ( 93)
00:07:20.774 6427.569 - 6452.775: 8.2709% ( 99)
00:07:20.774 6452.775 - 6503.188: 9.6591% ( 215)
00:07:20.774 6503.188 - 6553.600: 11.2474% ( 246)
00:07:20.774 6553.600 - 6604.012: 12.8874% ( 254)
00:07:20.774 6604.012 - 6654.425: 14.6630% ( 275)
00:07:20.774 6654.425 - 6704.837: 16.3869% ( 267)
00:07:20.774 6704.837 - 6755.249: 18.2787% ( 293)
00:07:20.774 6755.249 - 6805.662: 20.2931% ( 312)
00:07:20.774 6805.662 - 6856.074: 22.3786% ( 323)
00:07:20.774 6856.074 - 6906.486: 24.5287% ( 333)
00:07:20.774 6906.486 - 6956.898: 26.6400% ( 327)
00:07:20.774 6956.898 - 7007.311: 28.8933% ( 349)
00:07:20.774 7007.311 - 7057.723: 31.1338% ( 347)
00:07:20.774 7057.723 - 7108.135: 33.3226% ( 339)
00:07:20.774 7108.135 - 7158.548: 35.4920% ( 336)
00:07:20.774 7158.548 - 7208.960: 37.5775% ( 323)
00:07:20.774 7208.960 - 7259.372: 39.7534% ( 337)
00:07:20.774 7259.372 - 7309.785: 41.8001% ( 317)
00:07:20.774 7309.785 - 7360.197: 43.8339% ( 315)
00:07:20.774 7360.197 - 7410.609: 45.7838% ( 302)
00:07:20.774 7410.609 - 7461.022: 47.6498% ( 289)
00:07:20.774 7461.022 - 7511.434: 49.5674% ( 297)
00:07:20.774 7511.434 - 7561.846: 51.4011% ( 284)
00:07:20.774 7561.846 - 7612.258: 53.1379% ( 269)
00:07:20.774 7612.258 - 7662.671: 54.7198% ( 245)
00:07:20.774 7662.671 - 7713.083: 56.2048% ( 230)
00:07:20.774 7713.083 - 7763.495: 57.4768% ( 197)
00:07:20.774 7763.495 - 7813.908: 58.7100% ( 191)
00:07:20.774 7813.908 - 7864.320: 59.9044% ( 185)
00:07:20.774 7864.320 - 7914.732: 61.1247% ( 189)
00:07:20.774 7914.732 - 7965.145: 62.1255% ( 155)
00:07:20.774 7965.145 - 8015.557: 63.0876% ( 149)
00:07:20.774 8015.557 - 8065.969: 63.9205% ( 129)
00:07:20.774 8065.969 - 8116.382: 64.6759% ( 117)
00:07:20.774 8116.382 - 8166.794: 65.4378% ( 118)
00:07:20.774 8166.794 - 8217.206: 66.1092% ( 104)
00:07:20.774 8217.206 - 8267.618: 66.7226% ( 95)
00:07:20.774 8267.618 - 8318.031: 67.3360% ( 95)
00:07:20.774 8318.031 - 8368.443: 67.8913% ( 86)
00:07:20.774 8368.443 - 8418.855: 68.4143% ( 81)
00:07:20.774 8418.855 - 8469.268: 68.9114% ( 77)
00:07:20.774 8469.268 - 8519.680: 69.4861% ( 89)
00:07:20.774 8519.680 - 8570.092: 70.0801% ( 92)
00:07:20.774 8570.092 - 8620.505: 70.6805% ( 93)
00:07:20.774 8620.505 - 8670.917: 71.3197% ( 99)
00:07:20.774 8670.917 - 8721.329: 71.9460% ( 97)
00:07:20.774 8721.329 - 8771.742: 72.5142% ( 88)
00:07:20.774 8771.742 - 8822.154: 73.1018% ( 91)
00:07:20.774 8822.154 - 8872.566: 73.6893% ( 91)
00:07:20.774 8872.566 - 8922.978: 74.2833% ( 92)
00:07:20.774 8922.978 - 8973.391: 74.8386% ( 86)
00:07:20.774 8973.391 - 9023.803: 75.4907% ( 101)
00:07:20.774 9023.803 - 9074.215: 76.1041% ( 95)
00:07:20.774 9074.215 - 9124.628: 76.7433% ( 99)
00:07:20.774 9124.628 - 9175.040: 77.3115% ( 88)
00:07:20.774 9175.040 - 9225.452: 77.9055% ( 92)
00:07:20.774 9225.452 - 9275.865: 78.5124% ( 94)
00:07:20.774 9275.865 - 9326.277: 79.0548% ( 84)
00:07:20.774 9326.277 - 9376.689: 79.5519% ( 77)
00:07:20.774 9376.689 - 9427.102: 80.0491% ( 77)
00:07:20.774 9427.102 - 9477.514: 80.5269% ( 74)
00:07:20.774 9477.514 - 9527.926: 80.9788% ( 70)
00:07:20.774 9527.926 - 9578.338: 81.4631% ( 75)
00:07:20.774 9578.338 - 9628.751: 81.9279% ( 72)
00:07:20.774 9628.751 - 9679.163: 82.4057% ( 74)
00:07:20.774 9679.163 - 9729.575: 82.8383% ( 67)
00:07:20.774 9729.575 - 9779.988: 83.2515% ( 64)
00:07:20.774 9779.988 - 9830.400: 83.6777% ( 66)
00:07:20.774 9830.400 - 9880.812: 84.0522% ( 58)
00:07:20.774 9880.812 - 9931.225: 84.4202% ( 57)
00:07:20.774 9931.225 - 9981.637: 84.8528% ( 67)
00:07:20.774 9981.637 - 10032.049: 85.2918% ( 68)
00:07:20.774 10032.049 - 10082.462: 85.7051% ( 64)
00:07:20.774 10082.462 - 10132.874: 86.0989% ( 61)
00:07:20.774 10132.874 - 10183.286: 86.4928% ( 61)
00:07:20.774 10183.286 - 10233.698: 86.8737% ( 59)
00:07:20.774 10233.698 - 10284.111: 87.3192% ( 69)
00:07:20.774 10284.111 - 10334.523: 87.7260% ( 63)
00:07:20.774 10334.523 - 10384.935: 88.1327% ( 63)
00:07:20.774 10384.935 - 10435.348: 88.5266% ( 61)
00:07:20.774 10435.348 - 10485.760: 88.8494% ( 50)
00:07:20.774 10485.760 - 10536.172: 89.2497% ( 62)
00:07:20.774 10536.172 - 10586.585: 89.5919% ( 53)
00:07:20.774 10586.585 - 10636.997: 89.9406% ( 54)
00:07:20.774 10636.997 - 10687.409: 90.2763% ( 52)
00:07:20.774 10687.409 - 10737.822: 90.6056% ( 51)
00:07:20.774 10737.822 - 10788.234: 90.9349% ( 51)
00:07:20.774 10788.234 - 10838.646: 91.2448% ( 48)
00:07:20.774 10838.646 - 10889.058: 91.5354% ( 45)
00:07:20.774 10889.058 - 10939.471: 91.8840% ( 54)
00:07:20.774 10939.471 - 10989.883: 92.1875% ( 47)
00:07:20.774 10989.883 - 11040.295: 92.4522% ( 41)
00:07:20.774 11040.295 - 11090.708: 92.6911% ( 37)
00:07:20.774 11090.708 - 11141.120: 92.9300% ( 37)
00:07:20.774 11141.120 - 11191.532: 93.1237% ( 30)
00:07:20.774 11191.532 - 11241.945: 93.3174% ( 30)
00:07:20.774 11241.945 - 11292.357: 93.4853% ( 26)
00:07:20.774 11292.357 - 11342.769: 93.6532% ( 26)
00:07:20.774 11342.769 - 11393.182: 93.7952% ( 22)
00:07:20.774 11393.182 - 11443.594: 93.9243% ( 20)
00:07:20.774 11443.594 - 11494.006: 94.0535% ( 20)
00:07:20.774 11494.006 - 11544.418: 94.1632% ( 17)
00:07:20.774 11544.418 - 11594.831: 94.2665% ( 16)
00:07:20.774 11594.831 - 11645.243: 94.3569% ( 14)
00:07:20.774 11645.243 - 11695.655: 94.4215% ( 10)
00:07:20.774 11695.655 - 11746.068: 94.4731% ( 8)
00:07:20.774 11746.068 - 11796.480: 94.5183% ( 7)
00:07:20.774 11796.480 - 11846.892: 94.5635% ( 7)
00:07:20.774 11846.892 - 11897.305: 94.6475% ( 13)
00:07:20.774 11897.305 - 11947.717: 94.7120% ( 10)
00:07:20.774 11947.717 - 11998.129: 94.7637% ( 8)
00:07:20.774 11998.129 - 12048.542: 94.8089% ( 7)
00:07:20.774 12048.542 - 12098.954: 94.8541% ( 7)
00:07:20.774 12098.954 - 12149.366: 94.9057% ( 8)
00:07:20.774 12149.366 - 12199.778: 94.9445% ( 6)
00:07:20.774 12199.778 - 12250.191: 94.9832% ( 6)
00:07:20.774 12250.191 - 12300.603: 95.0284% ( 7)
00:07:20.774 12300.603 - 12351.015: 95.1059% ( 12)
00:07:20.774 12351.015 - 12401.428: 95.1963% ( 14)
00:07:20.774 12401.428 - 12451.840: 95.2996% ( 16)
00:07:20.774 12451.840 - 12502.252: 95.4223% ( 19)
00:07:20.774 12502.252 - 12552.665: 95.5385% ( 18)
00:07:20.774 12552.665 - 12603.077: 95.6353% ( 15)
00:07:20.774 12603.077 - 12653.489: 95.7580% ( 19)
00:07:20.774 12653.489 - 12703.902: 95.9001% ( 22)
00:07:20.774 12703.902 - 12754.314: 96.0486% ( 23)
00:07:20.774 12754.314 - 12804.726: 96.1712% ( 19)
00:07:20.774 12804.726 - 12855.138: 96.3068% ( 21)
00:07:20.774 12855.138 - 12905.551: 96.4424% ( 21)
00:07:20.774 12905.551 - 13006.375: 96.7136% ( 42)
00:07:20.774 13006.375 - 13107.200: 96.9654% ( 39)
00:07:20.774 13107.200 - 13208.025: 97.2172% ( 39)
00:07:20.774 13208.025 - 13308.849: 97.3915% ( 27)
00:07:20.774 13308.849 - 13409.674: 97.5271% ( 21)
00:07:20.774 13409.674 - 13510.498: 97.6885% ( 25)
00:07:20.774 13510.498 - 13611.323: 97.8951% ( 32)
00:07:20.774 13611.323 - 13712.148: 98.0695% ( 27)
00:07:20.774 13712.148 - 13812.972: 98.2051% ( 21)
00:07:20.774 13812.972 - 13913.797: 98.3213% ( 18)
00:07:20.774 13913.797 - 14014.622: 98.4310% ( 17)
00:07:20.774 14014.622 - 14115.446: 98.5150% ( 13)
00:07:20.774 14115.446 - 14216.271: 98.6054% ( 14)
00:07:20.775 14216.271 - 14317.095: 98.6893% ( 13)
00:07:20.775 14317.095 - 14417.920: 98.7410% ( 8)
00:07:20.775 14417.920 - 14518.745: 98.8055% ( 10)
00:07:20.775 14518.745 - 14619.569: 98.8507% ( 7)
00:07:20.775 14619.569 - 14720.394: 98.8895% ( 6)
00:07:20.775 14720.394 - 14821.218: 98.9282% ( 6)
00:07:20.775 14821.218 - 14922.043: 98.9734% ( 7)
00:07:20.775 14922.043 - 15022.868: 99.0121% ( 6)
00:07:20.775 15022.868 - 15123.692: 99.0509% ( 6)
00:07:20.775 15123.692 - 15224.517: 99.0961% ( 7)
00:07:20.775 15224.517 - 15325.342: 99.1348% ( 6)
00:07:20.775 15325.342 - 15426.166: 99.1736% ( 6)
00:07:20.775 25508.628 - 25609.452: 99.1929% ( 3)
00:07:20.775 25609.452 - 25710.277: 99.2188% ( 4)
00:07:20.775 25710.277 - 25811.102: 99.2446% ( 4)
00:07:20.775 25811.102 - 26012.751: 99.3027% ( 9)
00:07:20.775 26012.751 - 26214.400: 99.3543% ( 8)
00:07:20.775 26214.400 - 26416.049: 99.4060% ( 8)
00:07:20.775 26416.049 - 26617.698: 99.4641% ( 9)
00:07:20.775 26617.698 - 26819.348: 99.5158% ( 8)
00:07:20.775 26819.348 - 27020.997: 99.5739% ( 9)
00:07:20.775 27020.997 - 27222.646: 99.5868% ( 2)
00:07:20.775 30247.385 - 30449.034: 99.6384% ( 8)
00:07:20.775 30449.034 - 30650.683: 99.6901% ( 8)
00:07:20.775 30650.683 - 30852.332: 99.7482% ( 9)
00:07:20.775 30852.332 - 31053.982: 99.7998% ( 8)
00:07:20.775 31053.982 - 31255.631: 99.8580% ( 9)
00:07:20.775 31255.631 - 31457.280: 99.9161% ( 9)
00:07:20.775 31457.280 - 31658.929: 99.9677% ( 8)
00:07:20.775 31658.929 - 31860.578: 100.0000% ( 5)
00:07:20.775 
00:07:20.775 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:20.775 ==============================================================================
00:07:20.775 Range in us Cumulative IO count
00:07:20.775 5696.591 - 5721.797: 0.0323% ( 5)
00:07:20.775 5747.003 - 5772.209: 0.0452% ( 2)
00:07:20.775 5772.209 - 5797.415: 0.0710% ( 4)
00:07:20.775 5797.415 - 5822.622: 0.1356% ( 10)
00:07:20.775 5822.622 - 5847.828: 0.2131% ( 12)
00:07:20.775 5847.828 - 5873.034: 0.3487% ( 21)
00:07:20.775 5873.034 - 5898.240: 0.4455% ( 15)
00:07:20.775 5898.240 - 5923.446: 0.5876% ( 22)
00:07:20.775 5923.446 - 5948.652: 0.7361% ( 23)
00:07:20.775 5948.652 - 5973.858: 0.9298% ( 30)
00:07:20.775 5973.858 - 5999.065: 1.1170% ( 29)
00:07:20.775 5999.065 - 6024.271: 1.2913% ( 27)
00:07:20.775 6024.271 - 6049.477: 1.4915% ( 31)
00:07:20.775 6049.477 - 6074.683: 1.7433% ( 39)
00:07:20.775 6074.683 - 6099.889: 2.0274% ( 44)
00:07:20.775 6099.889 - 6125.095: 2.3373% ( 48)
00:07:20.775 6125.095 - 6150.302: 2.6472% ( 48)
00:07:20.775 6150.302 - 6175.508: 2.9507% ( 47)
00:07:20.775 6175.508 - 6200.714: 3.2800% ( 51)
00:07:20.775 6200.714 - 6225.920: 3.6092% ( 51)
00:07:20.775 6225.920 - 6251.126: 3.9773% ( 57)
00:07:20.775 6251.126 - 6276.332: 4.4486% ( 73)
00:07:20.775 6276.332 - 6301.538: 4.9522% ( 78)
00:07:20.775 6301.538 - 6326.745: 5.4429% ( 76)
00:07:20.775 6326.745 - 6351.951: 5.9853% ( 84)
00:07:20.775 6351.951 - 6377.157: 6.5341% ( 85)
00:07:20.775 6377.157 - 6402.363: 7.1152% ( 90)
00:07:20.775 6402.363 - 6427.569: 7.7157% ( 93)
00:07:20.775 6427.569 - 6452.775: 8.3871% ( 104)
00:07:20.775 6452.775 - 6503.188: 9.7753% ( 215)
00:07:20.775 6503.188 - 6553.600: 11.3895% ( 250)
00:07:20.775 6553.600 - 6604.012: 13.0488% ( 257)
00:07:20.775 6604.012 - 6654.425: 14.7921% ( 270)
00:07:20.775 6654.425 - 6704.837: 16.6258% ( 284)
00:07:20.775 6704.837 - 6755.249: 18.4530% ( 283)
00:07:20.775 6755.249 - 6805.662: 20.2415% ( 277)
00:07:20.775 6805.662 - 6856.074: 22.1591% ( 297)
00:07:20.775 6856.074 - 6906.486: 24.0702% ( 296)
00:07:20.775 6906.486 - 6956.898: 26.0912% ( 313)
00:07:20.775 6956.898 - 7007.311: 28.2025% ( 327)
00:07:20.775 7007.311 - 7057.723: 30.4300% ( 345)
00:07:20.775 7057.723 - 7108.135: 32.6188% ( 339)
00:07:20.775 7108.135 - 7158.548: 34.8786% ( 350)
00:07:20.775 7158.548 - 7208.960: 37.1578% ( 353)
00:07:20.775 7208.960 - 7259.372: 39.4176% ( 350)
00:07:20.775 7259.372 - 7309.785: 41.6387% ( 344)
00:07:20.775 7309.785 - 7360.197: 43.7435% ( 326)
00:07:20.775 7360.197 - 7410.609: 45.6612% ( 297)
00:07:20.775 7410.609 - 7461.022: 47.5529% ( 293)
00:07:20.775 7461.022 - 7511.434: 49.3737% ( 282)
00:07:20.775 7511.434 - 7561.846: 51.0976% ( 267)
00:07:20.775 7561.846 - 7612.258: 52.7182% ( 251)
00:07:20.775 7612.258 - 7662.671: 54.3130% ( 247)
00:07:20.775 7662.671 - 7713.083: 55.7787% ( 227)
00:07:20.775 7713.083 - 7763.495: 57.1281% ( 209)
00:07:20.775 7763.495 - 7813.908: 58.3678% ( 192)
00:07:20.775 7813.908 - 7864.320: 59.5235% ( 179)
00:07:20.775 7864.320 - 7914.732: 60.5630% ( 161)
00:07:20.775 7914.732 - 7965.145: 61.5832% ( 158)
00:07:20.775 7965.145 - 8015.557: 62.4742% ( 138)
00:07:20.775 8015.557 - 8065.969: 63.3006% ( 128)
00:07:20.775 8065.969 - 8116.382: 64.0044% ( 109)
00:07:20.775 8116.382 - 8166.794: 64.7469% ( 115)
00:07:20.775 8166.794 - 8217.206: 65.4119% ( 103)
00:07:20.775 8217.206 - 8267.618: 66.0318% ( 96)
00:07:20.775 8267.618 - 8318.031: 66.6710% ( 99)
00:07:20.775 8318.031 - 8368.443: 67.2973% ( 97)
00:07:20.775 8368.443 - 8418.855: 67.8913% ( 92)
00:07:20.775 8418.855 - 8469.268: 68.4982% ( 94)
00:07:20.775 8469.268 - 8519.680: 69.1439% ( 100)
00:07:20.775 8519.680 - 8570.092: 69.8024% ( 102)
00:07:20.775 8570.092 - 8620.505: 70.5127% ( 110)
00:07:20.775 8620.505 - 8670.917: 71.1196% ( 94)
00:07:20.775 8670.917 - 8721.329: 71.7588% ( 99)
00:07:20.775 8721.329 - 8771.742: 72.4303% ( 104)
00:07:20.775 8771.742 - 8822.154: 73.1018% ( 104)
00:07:20.775 8822.154 - 8872.566: 73.6829% ( 90)
00:07:20.775 8872.566 - 8922.978: 74.3091% ( 97)
00:07:20.775 8922.978 - 8973.391: 74.9161% ( 94)
00:07:20.775 8973.391 - 9023.803: 75.5230% ( 94)
00:07:20.775 9023.803 - 9074.215: 76.1299% ( 94)
00:07:20.775 9074.215 - 9124.628: 76.7756% ( 100)
00:07:20.775 9124.628 - 9175.040: 77.4277% ( 101)
00:07:20.775 9175.040 - 9225.452: 78.0475% ( 96)
00:07:20.775 9225.452 - 9275.865: 78.6996% ( 101)
00:07:20.775 9275.865 - 9326.277: 79.3195% ( 96)
00:07:20.775 9326.277 - 9376.689: 79.8877% ( 88)
00:07:20.775 9376.689 - 9427.102: 80.3784% ( 76)
00:07:20.775 9427.102 - 9477.514: 80.8691% ( 76)
00:07:20.775 9477.514 - 9527.926: 81.3856% ( 80)
00:07:20.775 9527.926 - 9578.338: 81.8892% ( 78)
00:07:20.775 9578.338 - 9628.751: 82.3541% ( 72)
00:07:20.775 9628.751 - 9679.163: 82.7867% ( 67)
00:07:20.775 9679.163 - 9729.575: 83.2515% ( 72)
00:07:20.775 9729.575 - 9779.988: 83.6325% ( 59)
00:07:20.775 9779.988 - 9830.400: 84.0005% ( 57)
00:07:20.775 9830.400 - 9880.812: 84.3363% ( 52)
00:07:20.775 9880.812 - 9931.225: 84.6397% ( 47)
00:07:20.775 9931.225 - 9981.637: 84.9755% ( 52)
00:07:20.775 9981.637 - 10032.049: 85.3435% ( 57)
00:07:20.775 10032.049 - 10082.462: 85.6986% ( 55)
00:07:20.775 10082.462 - 10132.874: 86.0473% ( 54)
00:07:20.775 10132.874 - 10183.286: 86.4153% ( 57)
00:07:20.775 10183.286 - 10233.698: 86.7898% ( 58)
00:07:20.775 10233.698 - 10284.111: 87.1901% ( 62)
00:07:20.775 10284.111 - 10334.523: 87.6033% ( 64)
00:07:20.775 10334.523 - 10384.935: 88.0165% ( 64)
00:07:20.775 10384.935 - 10435.348: 88.3523% ( 52)
00:07:20.775 10435.348 - 10485.760: 88.6686% ( 49)
00:07:20.775 10485.760 - 10536.172: 89.0173% ( 54)
00:07:20.775 10536.172 - 10586.585: 89.3337% ( 49)
00:07:20.775 10586.585 - 10636.997: 89.6242% ( 45)
00:07:20.775 10636.997 - 10687.409: 89.9793% ( 55)
00:07:20.775 10687.409 - 10737.822: 90.3603% ( 59)
00:07:20.775 10737.822 - 10788.234: 90.6767% ( 49)
00:07:20.775 10788.234 - 10838.646: 90.9866% ( 48)
00:07:20.775 10838.646 - 10889.058: 91.2900% ( 47)
00:07:20.775 10889.058 - 10939.471: 91.5548% ( 41)
00:07:20.775 10939.471 - 10989.883: 91.8066% ( 39)
00:07:20.775 10989.883 - 11040.295: 92.0777% ( 42)
00:07:20.775 11040.295 - 11090.708: 92.3037% ( 35)
00:07:20.775 11090.708 - 11141.120: 92.5362% ( 36)
00:07:20.775 11141.120 - 11191.532: 92.7299% ( 30)
00:07:20.775 11191.532 - 11241.945: 92.9042% ( 27)
00:07:20.775 11241.945 - 11292.357: 93.0656% ( 25)
00:07:20.775 11292.357 - 11342.769: 93.2851% ( 34)
00:07:20.775 11342.769 - 11393.182: 93.4595% ( 27)
00:07:20.775 11393.182 - 11443.594: 93.6273% ( 26)
00:07:20.775 11443.594 - 11494.006: 93.7823% ( 24)
00:07:20.775 11494.006 - 11544.418: 93.9308% ( 23)
00:07:20.775 11544.418 - 11594.831: 94.1051% ( 27)
00:07:20.775 11594.831 - 11645.243: 94.2794% ( 27)
00:07:20.775 11645.243 - 11695.655: 94.4279% ( 23)
00:07:20.775 11695.655 - 11746.068: 94.5442% ( 18)
00:07:20.775 11746.068 - 11796.480: 94.6410% ( 15)
00:07:20.775 11796.480 - 11846.892: 94.7443% ( 16)
00:07:20.775 11846.892 - 11897.305: 94.8476% ( 16)
00:07:20.775 11897.305 - 11947.717: 94.9638% ( 18)
00:07:20.775 11947.717 - 11998.129: 95.0542% ( 14)
00:07:20.775 11998.129 - 12048.542: 95.1382% ( 13)
00:07:20.775 12048.542 - 12098.954: 95.2157% ( 12)
00:07:20.775 12098.954 - 12149.366: 95.2867% ( 11)
00:07:20.775 12149.366 - 12199.778: 95.3577% ( 11)
00:07:20.775 12199.778 - 12250.191: 95.4287% ( 11)
00:07:20.775 12250.191 - 12300.603: 95.4868% ( 9)
00:07:20.775 12300.603 - 12351.015: 95.5643% ( 12)
00:07:20.776 12351.015 - 12401.428: 95.6418% ( 12)
00:07:20.776 12401.428 - 12451.840: 95.7257% ( 13)
00:07:20.776 12451.840 - 12502.252: 95.8032% ( 12)
00:07:20.776 12502.252 - 12552.665: 95.9065% ( 16)
00:07:20.776 12552.665 - 12603.077: 96.0034% ( 15)
00:07:20.776 12603.077 - 12653.489: 96.1131% ( 17)
00:07:20.776 12653.489 - 12703.902: 96.2358% ( 19)
00:07:20.776 12703.902 - 12754.314: 96.3456% ( 17)
00:07:20.776 12754.314 - 12804.726: 96.4682% ( 19)
00:07:20.776 12804.726 - 12855.138: 96.6038% ( 21)
00:07:20.776 12855.138 - 12905.551: 96.7459% ( 22)
00:07:20.776 12905.551 - 13006.375: 97.0041% ( 40)
00:07:20.776 13006.375 - 13107.200: 97.2624% ( 40)
00:07:20.776 13107.200 - 13208.025: 97.5400% ( 43)
00:07:20.776 13208.025 - 13308.849: 97.8112% ( 42)
00:07:20.776 13308.849 - 13409.674: 98.0243% ( 33)
00:07:20.776 13409.674 - 13510.498: 98.1986% ( 27)
00:07:20.776 13510.498 - 13611.323: 98.3213% ( 19)
00:07:20.776 13611.323 - 13712.148: 98.4569% ( 21)
00:07:20.776 13712.148 - 13812.972: 98.5537% ( 15)
00:07:20.776 13812.972 - 13913.797: 98.6441% ( 14)
00:07:20.776 13913.797 - 14014.622: 98.7280% ( 13)
00:07:20.776 14014.622 - 14115.446: 98.7603% ( 5)
00:07:20.776 14216.271 - 14317.095: 98.7732% ( 2)
00:07:20.776 14317.095 - 14417.920: 98.8249% ( 8)
00:07:20.776 14417.920 - 14518.745: 98.8572% ( 5)
00:07:20.776 14518.745 - 14619.569: 98.8959% ( 6)
00:07:20.776 14619.569 - 14720.394: 98.9411% ( 7)
00:07:20.776 14720.394 - 14821.218: 98.9799% ( 6)
00:07:20.776 14821.218 - 14922.043: 99.0186% ( 6)
00:07:20.776 14922.043 - 15022.868: 99.0638% ( 7)
00:07:20.776 15022.868 - 15123.692: 99.1025% ( 6)
00:07:20.776 15123.692 - 15224.517: 99.1413% ( 6)
00:07:20.776 15224.517 - 15325.342: 99.1736% ( 5)
00:07:20.776 24097.083 - 24197.908: 99.1800% ( 1)
00:07:20.776 24197.908 - 24298.732: 99.2058% ( 4)
00:07:20.776 24298.732 - 24399.557: 99.2252% ( 3)
00:07:20.776 24399.557 - 24500.382: 99.2510% ( 4)
00:07:20.776 24500.382 - 24601.206: 99.2833% ( 5)
00:07:20.776 24601.206 - 24702.031: 99.3091% ( 4)
00:07:20.776 24702.031 - 24802.855: 99.3285% ( 3)
00:07:20.776 24802.855 - 24903.680: 99.3543% ( 4)
00:07:20.776 24903.680 - 25004.505: 99.3802% ( 4)
00:07:20.776 25004.505 - 25105.329: 99.4124% ( 5)
00:07:20.776 25105.329 - 25206.154: 99.4383% ( 4)
00:07:20.776 25206.154 - 25306.978: 99.4641% ( 4)
00:07:20.776 25306.978 - 25407.803: 99.4899% ( 4)
00:07:20.776 25407.803 - 25508.628: 99.5158% ( 4)
00:07:20.776 25508.628 - 25609.452: 99.5416% ( 4)
00:07:20.776 25609.452 - 25710.277: 99.5739% ( 5)
00:07:20.776 25710.277 - 25811.102: 99.5868% ( 2)
00:07:20.776 28835.840 - 29037.489: 99.6255% ( 6)
00:07:20.776 29037.489 - 29239.138: 99.6772% ( 8)
00:07:20.776 29239.138 - 29440.788: 99.7288% ( 8)
00:07:20.776 29440.788 - 29642.437: 99.7869% ( 9)
00:07:20.776 29642.437 - 29844.086: 99.8386% ( 8)
00:07:20.776 29844.086 - 30045.735: 99.8838% ( 7)
00:07:20.776 30045.735 - 30247.385: 99.9419% ( 9)
00:07:20.776 30247.385 - 30449.034: 100.0000% ( 9)
00:07:20.776 
00:07:20.776 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:20.776 ==============================================================================
00:07:20.776 Range in us Cumulative IO count
00:07:20.776 5772.209 - 5797.415: 0.0129% ( 2)
00:07:20.776 5797.415 - 5822.622: 0.0710% ( 9)
00:07:20.776 5822.622 - 5847.828: 0.1550% ( 13)
00:07:20.776 5847.828 - 5873.034: 0.2260% ( 11)
00:07:20.776 5873.034 - 5898.240: 0.3422% ( 18)
00:07:20.776 5898.240 - 5923.446: 0.5488% ( 32)
00:07:20.776 5923.446 - 5948.652: 0.7619% ( 33)
00:07:20.776 5948.652 - 5973.858: 0.9168% ( 24)
00:07:20.776 5973.858 - 5999.065: 1.1041% ( 29)
00:07:20.776 5999.065 - 6024.271: 1.3107% ( 32)
00:07:20.776 6024.271 - 6049.477: 1.5819% ( 42)
00:07:20.776 6049.477 - 6074.683: 1.8143% ( 36)
00:07:20.776 6074.683 - 6099.889: 2.0209% ( 32)
00:07:20.776 6099.889 - 6125.095: 2.3244% ( 47)
00:07:20.776 6125.095 - 6150.302: 2.6085% ( 44)
00:07:20.776 6150.302 - 6175.508: 2.8926% ( 44)
00:07:20.776 6175.508 - 6200.714: 3.2089% ( 49)
00:07:20.776 6200.714 - 6225.920: 3.5059% ( 46)
00:07:20.776 6225.920 - 6251.126: 3.8546% ( 54)
00:07:20.776 6251.126 - 6276.332: 4.2226% ( 57)
00:07:20.776 6276.332 - 6301.538: 4.6552% ( 67)
00:07:20.776 6301.538 - 6326.745: 5.1459% ( 76)
00:07:20.776 6326.745 - 6351.951: 5.6237% ( 74)
00:07:20.776 6351.951 - 6377.157: 6.1532% ( 82)
00:07:20.776 6377.157 - 6402.363: 6.7472% ( 92)
00:07:20.776 6402.363 - 6427.569: 7.4057% ( 102)
00:07:20.776 6427.569 - 6452.775: 8.0772% ( 104)
00:07:20.776 6452.775 - 6503.188: 9.4331% ( 210)
00:07:20.776 6503.188 - 6553.600: 11.0150% ( 245)
00:07:20.776 6553.600 - 6604.012: 12.6098% ( 247)
00:07:20.776 6604.012 - 6654.425: 14.3466% ( 269)
00:07:20.776 6654.425 - 6704.837: 16.1803% ( 284)
00:07:20.776 6704.837 - 6755.249: 18.0139% ( 284)
00:07:20.776 6755.249 - 6805.662: 19.9574% ( 301)
00:07:20.776 6805.662 - 6856.074: 21.9137% ( 303)
00:07:20.776 6856.074 - 6906.486: 23.9282% ( 312)
00:07:20.776 6906.486 - 6956.898: 25.9749% ( 317)
00:07:20.776 6956.898 - 7007.311: 28.0346% ( 319)
00:07:20.776 7007.311 - 7057.723: 30.1136% ( 322)
00:07:20.776 7057.723 - 7108.135: 32.3928% ( 353)
00:07:20.776 7108.135 - 7158.548: 34.5493% ( 334)
00:07:20.776 7158.548 - 7208.960: 36.6477% ( 325)
00:07:20.776 7208.960 - 7259.372: 38.7461% ( 325)
00:07:20.776 7259.372 - 7309.785: 40.9026% ( 334)
00:07:20.776 7309.785 - 7360.197: 42.9494% ( 317)
00:07:20.776 7360.197 - 7410.609: 44.9574% ( 311)
00:07:20.776 7410.609 - 7461.022: 46.8040% ( 286)
00:07:20.776 7461.022 - 7511.434: 48.5537% ( 271)
00:07:20.776 7511.434 - 7561.846: 50.2066% ( 256)
00:07:20.776 7561.846 - 7612.258: 51.7756% ( 243)
00:07:20.776 7612.258 - 7662.671: 53.2670% ( 231)
00:07:20.776 7662.671 - 7713.083: 54.6488% ( 214)
00:07:20.776 7713.083 - 7763.495: 56.0434% ( 216)
00:07:20.776 7763.495 - 7813.908: 57.3928% ( 209)
00:07:20.776 7813.908 - 7864.320: 58.7616% ( 212)
00:07:20.776 7864.320 - 7914.732: 60.0336% ( 197)
00:07:20.776 7914.732 - 7965.145: 61.1699% ( 176)
00:07:20.776 7965.145 - 8015.557: 62.1965% ( 159)
00:07:20.776 8015.557 - 8065.969: 63.1327% ( 145)
00:07:20.776 8065.969 - 8116.382: 64.0560% ( 143)
00:07:20.776 8116.382 - 8166.794: 64.8115% ( 117)
00:07:20.776 8166.794 - 8217.206: 65.5023% ( 107)
00:07:20.776 8217.206 - 8267.618: 66.1738% ( 104)
00:07:20.776 8267.618 - 8318.031: 66.7807% ( 94)
00:07:20.776 8318.031 - 8368.443: 67.4006% ( 96)
00:07:20.776 8368.443 - 8418.855: 68.0333% ( 98)
00:07:20.776 8418.855 - 8469.268: 68.6402% ( 94)
00:07:20.776 8469.268 - 8519.680: 69.2988% ( 102)
00:07:20.776 8519.680 - 8570.092: 69.9832% ( 106)
00:07:20.776 8570.092 - 8620.505: 70.6676% ( 106)
00:07:20.776 8620.505 - 8670.917: 71.3133% ( 100)
00:07:20.776 8670.917 - 8721.329: 72.0235% ( 110)
00:07:20.776 8721.329 - 8771.742: 72.6240% ( 93)
00:07:20.776 8771.742 - 8822.154: 73.2567% ( 98)
00:07:20.776 8822.154 - 8872.566: 73.8959% ( 99)
00:07:20.776 8872.566 - 8922.978: 74.5997% ( 109)
00:07:20.776 8922.978 - 8973.391: 75.2647% ( 103)
00:07:20.776 8973.391 - 9023.803: 75.9233% ( 102)
00:07:20.776 9023.803 - 9074.215: 76.5948% ( 104)
00:07:20.776 9074.215 - 9124.628: 77.2856% ( 107)
00:07:20.776 9124.628 - 9175.040: 78.0152% ( 113)
00:07:20.776 9175.040 - 9225.452: 78.7577% ( 115)
00:07:20.776 9225.452 - 9275.865: 79.4357% ( 105)
00:07:20.776 9275.865 - 9326.277: 80.0878% ( 101)
00:07:20.776 9326.277 - 9376.689: 80.6495% ( 87)
00:07:20.776 9376.689 - 9427.102: 81.1467% ( 77)
00:07:20.776 9427.102 - 9477.514: 81.6374% ( 76)
00:07:20.776 9477.514 - 9527.926: 82.0958% ( 71)
00:07:20.776 9527.926 - 9578.338: 82.5736% ( 74)
00:07:20.776 9578.338 - 9628.751: 83.0062% ( 67)
00:07:20.776 9628.751 - 9679.163: 83.4065% ( 62)
00:07:20.776 9679.163 - 9729.575: 83.7745% ( 57)
00:07:20.776 9729.575 - 9779.988: 84.1426% ( 57)
00:07:20.776 9779.988 - 9830.400: 84.5170% ( 58)
00:07:20.776 9830.400 - 9880.812: 84.8399% ( 50)
00:07:20.776 9880.812 - 9931.225: 85.1304% ( 45)
00:07:20.776 9931.225 - 9981.637: 85.4339% ( 47)
00:07:20.776 9981.637 - 10032.049: 85.7051% ( 42)
00:07:20.776 10032.049 - 10082.462: 85.9956% ( 45)
00:07:20.776 10082.462 - 10132.874: 86.2991% ( 47)
00:07:20.776 10132.874 - 10183.286: 86.6348% ( 52)
00:07:20.776 10183.286 - 10233.698: 86.9512% ( 49)
00:07:20.776 10233.698 - 10284.111: 87.2740% ( 50)
00:07:20.776 10284.111 - 10334.523: 87.6420% ( 57)
00:07:20.776 10334.523 - 10384.935: 88.0101% ( 57)
00:07:20.776 10384.935 - 10435.348: 88.3781% ( 57)
00:07:20.776 10435.348 - 10485.760: 88.6686% ( 45)
00:07:20.776 10485.760 - 10536.172: 88.9850% ( 49)
00:07:20.776 10536.172 - 10586.585: 89.2949% ( 48)
00:07:20.776 10586.585 - 10636.997: 89.5790% ( 44)
00:07:20.776 10636.997 - 10687.409: 89.8631% ( 44)
00:07:20.776 10687.409 - 10737.822: 90.1343% ( 42)
00:07:20.777 10737.822 - 10788.234: 90.4378% ( 47)
00:07:20.777 10788.234 - 10838.646: 90.7606% ( 50)
00:07:20.777 10838.646 - 10889.058: 91.0447% ( 44)
00:07:20.777 10889.058 - 10939.471: 91.3029% ( 40)
00:07:20.777 10939.471 - 10989.883: 91.5548% ( 39)
00:07:20.777 10989.883 - 11040.295: 91.7678% ( 33)
00:07:20.777 11040.295 - 11090.708: 91.9873% ( 34)
00:07:20.777 11090.708 - 11141.120: 92.1875% ( 31)
00:07:20.777 11141.120 - 11191.532: 92.4135% ( 35)
00:07:20.777 11191.532 - 11241.945: 92.5943% ( 28)
00:07:20.777 11241.945 - 11292.357: 92.7880% ( 30)
00:07:20.777 11292.357 - 11342.769: 92.9365% ( 23)
00:07:20.777 11342.769 - 11393.182: 93.0591% ( 19)
00:07:20.777 11393.182 - 11443.594: 93.2012% ( 22)
00:07:20.777 11443.594 - 11494.006: 93.3561% ( 24)
00:07:20.777 11494.006 - 11544.418: 93.5369% ( 28)
00:07:20.777 11544.418 - 11594.831: 93.6983% ( 25)
00:07:20.777 11594.831 - 11645.243: 93.8662% ( 26)
00:07:20.777 11645.243 - 11695.655: 94.0083% ( 22)
00:07:20.777 11695.655 - 11746.068: 94.2020% ( 30)
00:07:20.777 11746.068 - 11796.480: 94.3957% ( 30)
00:07:20.777 11796.480 - 11846.892: 94.5894% ( 30)
00:07:20.777 11846.892 - 11897.305: 94.7766% ( 29)
00:07:20.777 11897.305 - 11947.717: 94.9768% ( 31)
00:07:20.777 11947.717 - 11998.129: 95.1705% ( 30)
00:07:20.777 11998.129 - 12048.542: 95.3577% ( 29)
00:07:20.777 12048.542 - 12098.954: 95.5256% ( 26)
00:07:20.777 12098.954 - 12149.366: 95.6805% ( 24)
00:07:20.777 12149.366 - 12199.778: 95.8032% ( 19)
00:07:20.777 12199.778 - 12250.191: 95.9323% ( 20)
00:07:20.777 12250.191 - 12300.603: 96.0615% ( 20)
00:07:20.777 12300.603 - 12351.015: 96.1583% ( 15)
00:07:20.777 12351.015 - 12401.428: 96.2293% ( 11)
00:07:20.777 12401.428 - 12451.840: 96.3068% ( 12)
00:07:20.777 12451.840 - 12502.252: 96.3972% ( 14)
00:07:20.777 12502.252 - 12552.665: 96.4682% ( 11)
00:07:20.777 12552.665 - 12603.077: 96.5715% ( 16)
00:07:20.777 12603.077 - 12653.489: 96.6619% ( 14)
00:07:20.777 12653.489 - 12703.902: 96.7394% ( 12)
00:07:20.777 12703.902 - 12754.314: 96.7975% ( 9)
00:07:20.777 12754.314 - 12804.726: 96.8556% ( 9)
00:07:20.777 12804.726 - 12855.138: 96.9202% ( 10)
00:07:20.777 12855.138 - 12905.551: 96.9783% ( 9)
00:07:20.777 12905.551 - 13006.375: 97.1074% ( 20)
00:07:20.777 13006.375 - 13107.200: 97.3011% ( 30)
00:07:20.777 13107.200 - 13208.025: 97.5013% ( 31)
00:07:20.777 13208.025 - 13308.849: 97.6756% ( 27)
00:07:20.777 13308.849 - 13409.674: 97.8822% ( 32)
00:07:20.777 13409.674 - 13510.498: 98.0566% ( 27)
00:07:20.777 13510.498 - 13611.323: 98.1921% ( 21)
00:07:20.777 13611.323 - 13712.148: 98.3213% ( 20)
00:07:20.777 13712.148 - 13812.972: 98.4440% ( 19)
00:07:20.777 13812.972 - 13913.797: 98.5795% ( 21)
00:07:20.777 13913.797 - 14014.622: 98.6635% ( 13)
00:07:20.777 14014.622 - 14115.446: 98.7022% ( 6)
00:07:20.777 14115.446 - 14216.271: 98.7539% ( 8)
00:07:20.777 14216.271 - 14317.095: 98.8184% ( 10)
00:07:20.777 14317.095 - 14417.920: 98.8572% ( 6)
00:07:20.777 14417.920 - 14518.745: 98.8959% ( 6)
00:07:20.777 14518.745 - 14619.569: 98.9347% ( 6)
00:07:20.777 14619.569 - 14720.394: 98.9799% ( 7)
00:07:20.777 14720.394 - 14821.218: 99.0186% ( 6)
00:07:20.777 14821.218 - 14922.043: 99.0573% ( 6)
00:07:20.777 14922.043 - 15022.868: 99.0961% ( 6)
00:07:20.777 15022.868 - 15123.692: 99.1348% ( 6)
00:07:20.777 15123.692 - 15224.517: 99.1736% ( 6)
00:07:20.777 22282.240 - 22383.065: 99.1800% ( 1)
00:07:20.777 22383.065 - 22483.889: 99.1994% ( 3)
00:07:20.777 22483.889 - 22584.714: 99.2188% ( 3)
00:07:20.777 22584.714 - 22685.538: 99.2446% ( 4)
00:07:20.777 22685.538 - 22786.363: 99.2704% ( 4)
00:07:20.777 22786.363 - 22887.188: 99.2962% ( 4)
00:07:20.777 22887.188 - 22988.012: 99.3285% ( 5)
00:07:20.777 22988.012 - 23088.837: 99.3543% ( 4)
00:07:20.777 23088.837 - 23189.662: 99.3802% ( 4)
00:07:20.777 23189.662 - 23290.486: 99.4060% ( 4)
00:07:20.777 23290.486 - 23391.311: 99.4383% ( 5)
00:07:20.777 23391.311 - 23492.135: 99.4641% ( 4)
00:07:20.777 23492.135 - 23592.960: 99.4899% ( 4)
00:07:20.777 23592.960 - 23693.785: 99.5158% ( 4)
00:07:20.777 23693.785 - 23794.609: 99.5416% ( 4)
00:07:20.777 23794.609 - 23895.434: 99.5674% ( 4)
00:07:20.777 23895.434 - 23996.258: 99.5868% ( 3)
00:07:20.777 27222.646 - 27424.295: 99.6191% ( 5)
00:07:20.777 27424.295 - 27625.945: 99.6772% ( 9)
00:07:20.777 27625.945 -
27827.594: 99.7288% ( 8) 00:07:20.777 27827.594 - 28029.243: 99.7869% ( 9) 00:07:20.777 28029.243 - 28230.892: 99.8386% ( 8) 00:07:20.777 28230.892 - 28432.542: 99.8902% ( 8) 00:07:20.777 28432.542 - 28634.191: 99.9483% ( 9) 00:07:20.777 28634.191 - 28835.840: 100.0000% ( 8) 00:07:20.777 00:07:20.777 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:20.777 ============================================================================== 00:07:20.777 Range in us Cumulative IO count 00:07:20.777 5772.209 - 5797.415: 0.0065% ( 1) 00:07:20.777 5797.415 - 5822.622: 0.0258% ( 3) 00:07:20.777 5822.622 - 5847.828: 0.0646% ( 6) 00:07:20.777 5847.828 - 5873.034: 0.1808% ( 18) 00:07:20.777 5873.034 - 5898.240: 0.2970% ( 18) 00:07:20.777 5898.240 - 5923.446: 0.4455% ( 23) 00:07:20.777 5923.446 - 5948.652: 0.6198% ( 27) 00:07:20.777 5948.652 - 5973.858: 0.8006% ( 28) 00:07:20.777 5973.858 - 5999.065: 0.9943% ( 30) 00:07:20.777 5999.065 - 6024.271: 1.1751% ( 28) 00:07:20.777 6024.271 - 6049.477: 1.3688% ( 30) 00:07:20.777 6049.477 - 6074.683: 1.5690% ( 31) 00:07:20.777 6074.683 - 6099.889: 1.7627% ( 30) 00:07:20.777 6099.889 - 6125.095: 2.0145% ( 39) 00:07:20.777 6125.095 - 6150.302: 2.2921% ( 43) 00:07:20.777 6150.302 - 6175.508: 2.6020% ( 48) 00:07:20.777 6175.508 - 6200.714: 2.9700% ( 57) 00:07:20.777 6200.714 - 6225.920: 3.3252% ( 55) 00:07:20.777 6225.920 - 6251.126: 3.7319% ( 63) 00:07:20.777 6251.126 - 6276.332: 4.2097% ( 74) 00:07:20.777 6276.332 - 6301.538: 4.6681% ( 71) 00:07:20.777 6301.538 - 6326.745: 5.1007% ( 67) 00:07:20.777 6326.745 - 6351.951: 5.6818% ( 90) 00:07:20.777 6351.951 - 6377.157: 6.2435% ( 87) 00:07:20.777 6377.157 - 6402.363: 6.8763% ( 98) 00:07:20.777 6402.363 - 6427.569: 7.5090% ( 98) 00:07:20.777 6427.569 - 6452.775: 8.1741% ( 103) 00:07:20.777 6452.775 - 6503.188: 9.5170% ( 208) 00:07:20.777 6503.188 - 6553.600: 10.9569% ( 223) 00:07:20.777 6553.600 - 6604.012: 12.5129% ( 241) 00:07:20.777 6604.012 - 6654.425: 14.2497% ( 269) 00:07:20.777 6654.425 - 6704.837: 15.9737% ( 267) 00:07:20.777 6704.837 - 6755.249: 17.7944% ( 282) 00:07:20.777 6755.249 - 6805.662: 19.6023% ( 280) 00:07:20.777 6805.662 - 6856.074: 21.6942% ( 324) 00:07:20.777 6856.074 - 6906.486: 23.6764% ( 307) 00:07:20.777 6906.486 - 6956.898: 25.7231% ( 317) 00:07:20.777 6956.898 - 7007.311: 27.8151% ( 324) 00:07:20.777 7007.311 - 7057.723: 29.9651% ( 333) 00:07:20.777 7057.723 - 7108.135: 32.1023% ( 331) 00:07:20.777 7108.135 - 7158.548: 34.2459% ( 332) 00:07:20.777 7158.548 - 7208.960: 36.2410% ( 309) 00:07:20.777 7208.960 - 7259.372: 38.2167% ( 306) 00:07:20.777 7259.372 - 7309.785: 40.1666% ( 302) 00:07:20.777 7309.785 - 7360.197: 42.1617% ( 309) 00:07:20.777 7360.197 - 7410.609: 44.0987% ( 300) 00:07:20.777 7410.609 - 7461.022: 46.0938% ( 309) 00:07:20.777 7461.022 - 7511.434: 47.8822% ( 277) 00:07:20.777 7511.434 - 7561.846: 49.6449% ( 273) 00:07:20.777 7561.846 - 7612.258: 51.2913% ( 255) 00:07:20.777 7612.258 - 7662.671: 52.8732% ( 245) 00:07:20.778 7662.671 - 7713.083: 54.2872% ( 219) 00:07:20.778 7713.083 - 7763.495: 55.6495% ( 211) 00:07:20.778 7763.495 - 7813.908: 56.9990% ( 209) 00:07:20.778 7813.908 - 7864.320: 58.3290% ( 206) 00:07:20.778 7864.320 - 7914.732: 59.5300% ( 186) 00:07:20.778 7914.732 - 7965.145: 60.6534% ( 174) 00:07:20.778 7965.145 - 8015.557: 61.8091% ( 179) 00:07:20.778 8015.557 - 8065.969: 62.7970% ( 153) 00:07:20.778 8065.969 - 8116.382: 63.8042% ( 156) 00:07:20.778 8116.382 - 8166.794: 64.6823% ( 136) 00:07:20.778 8166.794 - 8217.206: 65.4959% ( 126) 
00:07:20.778 8217.206 - 8267.618: 66.3094% ( 126) 00:07:20.778 8267.618 - 8318.031: 67.0325% ( 112) 00:07:20.778 8318.031 - 8368.443: 67.6653% ( 98) 00:07:20.778 8368.443 - 8418.855: 68.2916% ( 97) 00:07:20.778 8418.855 - 8469.268: 68.9437% ( 101) 00:07:20.778 8469.268 - 8519.680: 69.5958% ( 101) 00:07:20.778 8519.680 - 8570.092: 70.3254% ( 113) 00:07:20.778 8570.092 - 8620.505: 70.9646% ( 99) 00:07:20.778 8620.505 - 8670.917: 71.6038% ( 99) 00:07:20.778 8670.917 - 8721.329: 72.3205% ( 111) 00:07:20.778 8721.329 - 8771.742: 72.9726% ( 101) 00:07:20.778 8771.742 - 8822.154: 73.6118% ( 99) 00:07:20.778 8822.154 - 8872.566: 74.2317% ( 96) 00:07:20.778 8872.566 - 8922.978: 74.8515% ( 96) 00:07:20.778 8922.978 - 8973.391: 75.4520% ( 93) 00:07:20.778 8973.391 - 9023.803: 76.0331% ( 90) 00:07:20.778 9023.803 - 9074.215: 76.7045% ( 104) 00:07:20.778 9074.215 - 9124.628: 77.3502% ( 100) 00:07:20.778 9124.628 - 9175.040: 77.9700% ( 96) 00:07:20.778 9175.040 - 9225.452: 78.6157% ( 100) 00:07:20.778 9225.452 - 9275.865: 79.2097% ( 92) 00:07:20.778 9275.865 - 9326.277: 79.7585% ( 85) 00:07:20.778 9326.277 - 9376.689: 80.2299% ( 73) 00:07:20.778 9376.689 - 9427.102: 80.7787% ( 85) 00:07:20.778 9427.102 - 9477.514: 81.3468% ( 88) 00:07:20.778 9477.514 - 9527.926: 81.8827% ( 83) 00:07:20.778 9527.926 - 9578.338: 82.3864% ( 78) 00:07:20.778 9578.338 - 9628.751: 82.8964% ( 79) 00:07:20.778 9628.751 - 9679.163: 83.3613% ( 72) 00:07:20.778 9679.163 - 9729.575: 83.8068% ( 69) 00:07:20.778 9729.575 - 9779.988: 84.2330% ( 66) 00:07:20.778 9779.988 - 9830.400: 84.6978% ( 72) 00:07:20.778 9830.400 - 9880.812: 85.1756% ( 74) 00:07:20.778 9880.812 - 9931.225: 85.5824% ( 63) 00:07:20.778 9931.225 - 9981.637: 85.9504% ( 57) 00:07:20.778 9981.637 - 10032.049: 86.3120% ( 56) 00:07:20.778 10032.049 - 10082.462: 86.6994% ( 60) 00:07:20.778 10082.462 - 10132.874: 87.0093% ( 48) 00:07:20.778 10132.874 - 10183.286: 87.3902% ( 59) 00:07:20.778 10183.286 - 10233.698: 87.7131% ( 50) 00:07:20.778 10233.698 - 10284.111: 88.0682% ( 55) 00:07:20.778 10284.111 - 10334.523: 88.3329% ( 41) 00:07:20.778 10334.523 - 10384.935: 88.6299% ( 46) 00:07:20.778 10384.935 - 10435.348: 88.9463% ( 49) 00:07:20.778 10435.348 - 10485.760: 89.2045% ( 40) 00:07:20.778 10485.760 - 10536.172: 89.4757% ( 42) 00:07:20.778 10536.172 - 10586.585: 89.6952% ( 34) 00:07:20.778 10586.585 - 10636.997: 89.9406% ( 38) 00:07:20.778 10636.997 - 10687.409: 90.2053% ( 41) 00:07:20.778 10687.409 - 10737.822: 90.4636% ( 40) 00:07:20.778 10737.822 - 10788.234: 90.7154% ( 39) 00:07:20.778 10788.234 - 10838.646: 90.9543% ( 37) 00:07:20.778 10838.646 - 10889.058: 91.1738% ( 34) 00:07:20.778 10889.058 - 10939.471: 91.3933% ( 34) 00:07:20.778 10939.471 - 10989.883: 91.6064% ( 33) 00:07:20.778 10989.883 - 11040.295: 91.8001% ( 30) 00:07:20.778 11040.295 - 11090.708: 91.9615% ( 25) 00:07:20.778 11090.708 - 11141.120: 92.1423% ( 28) 00:07:20.778 11141.120 - 11191.532: 92.2843% ( 22) 00:07:20.778 11191.532 - 11241.945: 92.4070% ( 19) 00:07:20.778 11241.945 - 11292.357: 92.5426% ( 21) 00:07:20.778 11292.357 - 11342.769: 92.6653% ( 19) 00:07:20.778 11342.769 - 11393.182: 92.8138% ( 23) 00:07:20.778 11393.182 - 11443.594: 92.9881% ( 27) 00:07:20.778 11443.594 - 11494.006: 93.1495% ( 25) 00:07:20.778 11494.006 - 11544.418: 93.2980% ( 23) 00:07:20.778 11544.418 - 11594.831: 93.4595% ( 25) 00:07:20.778 11594.831 - 11645.243: 93.6467% ( 29) 00:07:20.778 11645.243 - 11695.655: 93.8468% ( 31) 00:07:20.778 11695.655 - 11746.068: 94.0276% ( 28) 00:07:20.778 11746.068 - 11796.480: 94.2278% 
( 31) 00:07:20.778 11796.480 - 11846.892: 94.4215% ( 30) 00:07:20.778 11846.892 - 11897.305: 94.6087% ( 29) 00:07:20.778 11897.305 - 11947.717: 94.8153% ( 32) 00:07:20.778 11947.717 - 11998.129: 95.0413% ( 35) 00:07:20.778 11998.129 - 12048.542: 95.2479% ( 32) 00:07:20.778 12048.542 - 12098.954: 95.4481% ( 31) 00:07:20.778 12098.954 - 12149.366: 95.6547% ( 32) 00:07:20.778 12149.366 - 12199.778: 95.8807% ( 35) 00:07:20.778 12199.778 - 12250.191: 96.0808% ( 31) 00:07:20.778 12250.191 - 12300.603: 96.2423% ( 25) 00:07:20.778 12300.603 - 12351.015: 96.3972% ( 24) 00:07:20.778 12351.015 - 12401.428: 96.5457% ( 23) 00:07:20.778 12401.428 - 12451.840: 96.6748% ( 20) 00:07:20.778 12451.840 - 12502.252: 96.7911% ( 18) 00:07:20.778 12502.252 - 12552.665: 96.8815% ( 14) 00:07:20.778 12552.665 - 12603.077: 96.9460% ( 10) 00:07:20.778 12603.077 - 12653.489: 96.9783% ( 5) 00:07:20.778 12653.489 - 12703.902: 97.0235% ( 7) 00:07:20.778 12703.902 - 12754.314: 97.0816% ( 9) 00:07:20.778 12754.314 - 12804.726: 97.1462% ( 10) 00:07:20.778 12804.726 - 12855.138: 97.2172% ( 11) 00:07:20.778 12855.138 - 12905.551: 97.2624% ( 7) 00:07:20.778 12905.551 - 13006.375: 97.3528% ( 14) 00:07:20.778 13006.375 - 13107.200: 97.4367% ( 13) 00:07:20.778 13107.200 - 13208.025: 97.5271% ( 14) 00:07:20.778 13208.025 - 13308.849: 97.6498% ( 19) 00:07:20.778 13308.849 - 13409.674: 97.7725% ( 19) 00:07:20.778 13409.674 - 13510.498: 97.9081% ( 21) 00:07:20.778 13510.498 - 13611.323: 98.0307% ( 19) 00:07:20.778 13611.323 - 13712.148: 98.1405% ( 17) 00:07:20.778 13712.148 - 13812.972: 98.2051% ( 10) 00:07:20.778 13812.972 - 13913.797: 98.2825% ( 12) 00:07:20.778 13913.797 - 14014.622: 98.3729% ( 14) 00:07:20.778 14014.622 - 14115.446: 98.4956% ( 19) 00:07:20.778 14115.446 - 14216.271: 98.6054% ( 17) 00:07:20.778 14216.271 - 14317.095: 98.6958% ( 14) 00:07:20.778 14317.095 - 14417.920: 98.7797% ( 13) 00:07:20.778 14417.920 - 14518.745: 98.8636% ( 13) 00:07:20.778 14518.745 - 14619.569: 98.9411% ( 12) 00:07:20.778 14619.569 - 14720.394: 99.0315% ( 14) 00:07:20.778 14720.394 - 14821.218: 99.0832% ( 8) 00:07:20.778 14821.218 - 14922.043: 99.1284% ( 7) 00:07:20.778 14922.043 - 15022.868: 99.1671% ( 6) 00:07:20.778 15022.868 - 15123.692: 99.1736% ( 1) 00:07:20.778 20669.046 - 20769.871: 99.1929% ( 3) 00:07:20.778 20769.871 - 20870.695: 99.2252% ( 5) 00:07:20.778 20870.695 - 20971.520: 99.2510% ( 4) 00:07:20.778 20971.520 - 21072.345: 99.2769% ( 4) 00:07:20.778 21072.345 - 21173.169: 99.3027% ( 4) 00:07:20.778 21173.169 - 21273.994: 99.3285% ( 4) 00:07:20.778 21273.994 - 21374.818: 99.3543% ( 4) 00:07:20.778 21374.818 - 21475.643: 99.3802% ( 4) 00:07:20.778 21475.643 - 21576.468: 99.4124% ( 5) 00:07:20.778 21576.468 - 21677.292: 99.4383% ( 4) 00:07:20.778 21677.292 - 21778.117: 99.4641% ( 4) 00:07:20.778 21778.117 - 21878.942: 99.4899% ( 4) 00:07:20.778 21878.942 - 21979.766: 99.5158% ( 4) 00:07:20.778 21979.766 - 22080.591: 99.5416% ( 4) 00:07:20.778 22080.591 - 22181.415: 99.5739% ( 5) 00:07:20.778 22181.415 - 22282.240: 99.5868% ( 2) 00:07:20.778 25508.628 - 25609.452: 99.5932% ( 1) 00:07:20.778 25609.452 - 25710.277: 99.6191% ( 4) 00:07:20.778 25710.277 - 25811.102: 99.6449% ( 4) 00:07:20.778 25811.102 - 26012.751: 99.6965% ( 8) 00:07:20.778 26012.751 - 26214.400: 99.7546% ( 9) 00:07:20.778 26214.400 - 26416.049: 99.8063% ( 8) 00:07:20.778 26416.049 - 26617.698: 99.8580% ( 8) 00:07:20.778 26617.698 - 26819.348: 99.9161% ( 9) 00:07:20.778 26819.348 - 27020.997: 99.9677% ( 8) 00:07:20.778 27020.997 - 27222.646: 100.0000% ( 5) 
00:07:20.778 00:07:20.778 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:20.778 ============================================================================== 00:07:20.778 Range in us Cumulative IO count 00:07:20.778 5772.209 - 5797.415: 0.0193% ( 3) 00:07:20.778 5797.415 - 5822.622: 0.0772% ( 9) 00:07:20.778 5822.622 - 5847.828: 0.1286% ( 8) 00:07:20.778 5847.828 - 5873.034: 0.1993% ( 11) 00:07:20.778 5873.034 - 5898.240: 0.3665% ( 26) 00:07:20.778 5898.240 - 5923.446: 0.5144% ( 23) 00:07:20.778 5923.446 - 5948.652: 0.6237% ( 17) 00:07:20.778 5948.652 - 5973.858: 0.7395% ( 18) 00:07:20.778 5973.858 - 5999.065: 0.8488% ( 17) 00:07:20.778 5999.065 - 6024.271: 1.0417% ( 30) 00:07:20.778 6024.271 - 6049.477: 1.2024% ( 25) 00:07:20.778 6049.477 - 6074.683: 1.4339% ( 36) 00:07:20.778 6074.683 - 6099.889: 1.6654% ( 36) 00:07:20.778 6099.889 - 6125.095: 1.9226% ( 40) 00:07:20.778 6125.095 - 6150.302: 2.2248% ( 47) 00:07:20.778 6150.302 - 6175.508: 2.5206% ( 46) 00:07:20.778 6175.508 - 6200.714: 2.8485% ( 51) 00:07:20.778 6200.714 - 6225.920: 3.2343% ( 60) 00:07:20.778 6225.920 - 6251.126: 3.6265% ( 61) 00:07:20.779 6251.126 - 6276.332: 4.0766% ( 70) 00:07:20.779 6276.332 - 6301.538: 4.5203% ( 69) 00:07:20.779 6301.538 - 6326.745: 5.0347% ( 80) 00:07:20.779 6326.745 - 6351.951: 5.6006% ( 88) 00:07:20.779 6351.951 - 6377.157: 6.1278% ( 82) 00:07:20.779 6377.157 - 6402.363: 6.6551% ( 82) 00:07:20.779 6402.363 - 6427.569: 7.2724% ( 96) 00:07:20.779 6427.569 - 6452.775: 7.9218% ( 101) 00:07:20.779 6452.775 - 6503.188: 9.2785% ( 211) 00:07:20.779 6503.188 - 6553.600: 10.8796% ( 249) 00:07:20.779 6553.600 - 6604.012: 12.4936% ( 251) 00:07:20.779 6604.012 - 6654.425: 14.0754% ( 246) 00:07:20.779 6654.425 - 6704.837: 15.8050% ( 269) 00:07:20.779 6704.837 - 6755.249: 17.6955% ( 294) 00:07:20.779 6755.249 - 6805.662: 19.6695% ( 307) 00:07:20.779 6805.662 - 6856.074: 21.7400% ( 322) 00:07:20.779 6856.074 - 6906.486: 23.8104% ( 322) 00:07:20.779 6906.486 - 6956.898: 25.8166% ( 312) 00:07:20.779 6956.898 - 7007.311: 27.9192% ( 327) 00:07:20.779 7007.311 - 7057.723: 30.0862% ( 337) 00:07:20.779 7057.723 - 7108.135: 32.1888% ( 327) 00:07:20.779 7108.135 - 7158.548: 34.2721% ( 324) 00:07:20.779 7158.548 - 7208.960: 36.2847% ( 313) 00:07:20.779 7208.960 - 7259.372: 38.2652% ( 308) 00:07:20.779 7259.372 - 7309.785: 40.4578% ( 341) 00:07:20.779 7309.785 - 7360.197: 42.5347% ( 323) 00:07:20.779 7360.197 - 7410.609: 44.3544% ( 283) 00:07:20.779 7410.609 - 7461.022: 46.2063% ( 288) 00:07:20.779 7461.022 - 7511.434: 48.0388% ( 285) 00:07:20.779 7511.434 - 7561.846: 49.8521% ( 282) 00:07:20.779 7561.846 - 7612.258: 51.5561% ( 265) 00:07:20.779 7612.258 - 7662.671: 53.1636% ( 250) 00:07:20.779 7662.671 - 7713.083: 54.6746% ( 235) 00:07:20.779 7713.083 - 7763.495: 56.1150% ( 224) 00:07:20.779 7763.495 - 7813.908: 57.5360% ( 221) 00:07:20.779 7813.908 - 7864.320: 58.9185% ( 215) 00:07:20.779 7864.320 - 7914.732: 60.0952% ( 183) 00:07:20.779 7914.732 - 7965.145: 61.2076% ( 173) 00:07:20.779 7965.145 - 8015.557: 62.2106% ( 156) 00:07:20.779 8015.557 - 8065.969: 63.2137% ( 156) 00:07:20.779 8065.969 - 8116.382: 64.1204% ( 141) 00:07:20.779 8116.382 - 8166.794: 64.9177% ( 124) 00:07:20.779 8166.794 - 8217.206: 65.6057% ( 107) 00:07:20.779 8217.206 - 8267.618: 66.3194% ( 111) 00:07:20.779 8267.618 - 8318.031: 67.0139% ( 108) 00:07:20.779 8318.031 - 8368.443: 67.6955% ( 106) 00:07:20.779 8368.443 - 8418.855: 68.3128% ( 96) 00:07:20.779 8418.855 - 8469.268: 68.9108% ( 93) 00:07:20.779 8469.268 - 8519.680: 
69.5538% ( 100) 00:07:20.779 8519.680 - 8570.092: 70.2353% ( 106) 00:07:20.779 8570.092 - 8620.505: 70.7690% ( 83) 00:07:20.779 8620.505 - 8670.917: 71.3670% ( 93) 00:07:20.779 8670.917 - 8721.329: 71.9843% ( 96) 00:07:20.779 8721.329 - 8771.742: 72.6723% ( 107) 00:07:20.779 8771.742 - 8822.154: 73.3989% ( 113) 00:07:20.779 8822.154 - 8872.566: 74.0934% ( 108) 00:07:20.779 8872.566 - 8922.978: 74.7042% ( 95) 00:07:20.779 8922.978 - 8973.391: 75.2636% ( 87) 00:07:20.779 8973.391 - 9023.803: 75.8359% ( 89) 00:07:20.779 9023.803 - 9074.215: 76.3889% ( 86) 00:07:20.779 9074.215 - 9124.628: 76.9676% ( 90) 00:07:20.779 9124.628 - 9175.040: 77.5527% ( 91) 00:07:20.779 9175.040 - 9225.452: 78.1314% ( 90) 00:07:20.779 9225.452 - 9275.865: 78.6716% ( 84) 00:07:20.779 9275.865 - 9326.277: 79.2052% ( 83) 00:07:20.779 9326.277 - 9376.689: 79.8097% ( 94) 00:07:20.779 9376.689 - 9427.102: 80.3691% ( 87) 00:07:20.779 9427.102 - 9477.514: 80.9606% ( 92) 00:07:20.779 9477.514 - 9527.926: 81.4943% ( 83) 00:07:20.779 9527.926 - 9578.338: 82.0087% ( 80) 00:07:20.779 9578.338 - 9628.751: 82.5103% ( 78) 00:07:20.779 9628.751 - 9679.163: 82.9668% ( 71) 00:07:20.779 9679.163 - 9729.575: 83.4041% ( 68) 00:07:20.779 9729.575 - 9779.988: 83.8799% ( 74) 00:07:20.779 9779.988 - 9830.400: 84.3686% ( 76) 00:07:20.779 9830.400 - 9880.812: 84.8251% ( 71) 00:07:20.779 9880.812 - 9931.225: 85.3331% ( 79) 00:07:20.779 9931.225 - 9981.637: 85.8218% ( 76) 00:07:20.779 9981.637 - 10032.049: 86.3169% ( 77) 00:07:20.779 10032.049 - 10082.462: 86.7477% ( 67) 00:07:20.779 10082.462 - 10132.874: 87.2492% ( 78) 00:07:20.779 10132.874 - 10183.286: 87.7186% ( 73) 00:07:20.779 10183.286 - 10233.698: 88.1752% ( 71) 00:07:20.779 10233.698 - 10284.111: 88.6124% ( 68) 00:07:20.779 10284.111 - 10334.523: 89.0239% ( 64) 00:07:20.779 10334.523 - 10384.935: 89.3904% ( 57) 00:07:20.779 10384.935 - 10435.348: 89.7055% ( 49) 00:07:20.779 10435.348 - 10485.760: 90.0013% ( 46) 00:07:20.779 10485.760 - 10536.172: 90.2649% ( 41) 00:07:20.779 10536.172 - 10586.585: 90.5285% ( 41) 00:07:20.779 10586.585 - 10636.997: 90.7858% ( 40) 00:07:20.779 10636.997 - 10687.409: 91.0172% ( 36) 00:07:20.779 10687.409 - 10737.822: 91.2359% ( 34) 00:07:20.779 10737.822 - 10788.234: 91.4673% ( 36) 00:07:20.779 10788.234 - 10838.646: 91.6602% ( 30) 00:07:20.779 10838.646 - 10889.058: 91.8338% ( 27) 00:07:20.779 10889.058 - 10939.471: 91.9882% ( 24) 00:07:20.779 10939.471 - 10989.883: 92.1168% ( 20) 00:07:20.779 10989.883 - 11040.295: 92.2518% ( 21) 00:07:20.779 11040.295 - 11090.708: 92.4126% ( 25) 00:07:20.779 11090.708 - 11141.120: 92.5669% ( 24) 00:07:20.779 11141.120 - 11191.532: 92.6955% ( 20) 00:07:20.779 11191.532 - 11241.945: 92.8305% ( 21) 00:07:20.779 11241.945 - 11292.357: 92.9655% ( 21) 00:07:20.779 11292.357 - 11342.769: 93.1134% ( 23) 00:07:20.779 11342.769 - 11393.182: 93.3128% ( 31) 00:07:20.779 11393.182 - 11443.594: 93.4606% ( 23) 00:07:20.779 11443.594 - 11494.006: 93.6085% ( 23) 00:07:20.779 11494.006 - 11544.418: 93.7886% ( 28) 00:07:20.779 11544.418 - 11594.831: 93.9493% ( 25) 00:07:20.779 11594.831 - 11645.243: 94.1165% ( 26) 00:07:20.779 11645.243 - 11695.655: 94.2580% ( 22) 00:07:20.779 11695.655 - 11746.068: 94.3994% ( 22) 00:07:20.779 11746.068 - 11796.480: 94.5602% ( 25) 00:07:20.779 11796.480 - 11846.892: 94.6824% ( 19) 00:07:20.779 11846.892 - 11897.305: 94.8174% ( 21) 00:07:20.779 11897.305 - 11947.717: 94.9588% ( 22) 00:07:20.779 11947.717 - 11998.129: 95.0939% ( 21) 00:07:20.779 11998.129 - 12048.542: 95.2289% ( 21) 00:07:20.779 12048.542 
- 12098.954: 95.3768% ( 23) 00:07:20.779 12098.954 - 12149.366: 95.5118% ( 21) 00:07:20.779 12149.366 - 12199.778: 95.6211% ( 17) 00:07:20.779 12199.778 - 12250.191: 95.7176% ( 15) 00:07:20.779 12250.191 - 12300.603: 95.8140% ( 15) 00:07:20.779 12300.603 - 12351.015: 95.8976% ( 13) 00:07:20.779 12351.015 - 12401.428: 95.9684% ( 11) 00:07:20.779 12401.428 - 12451.840: 96.0712% ( 16) 00:07:20.779 12451.840 - 12502.252: 96.1613% ( 14) 00:07:20.779 12502.252 - 12552.665: 96.2641% ( 16) 00:07:20.779 12552.665 - 12603.077: 96.3799% ( 18) 00:07:20.779 12603.077 - 12653.489: 96.4892% ( 17) 00:07:20.779 12653.489 - 12703.902: 96.5792% ( 14) 00:07:20.779 12703.902 - 12754.314: 96.6692% ( 14) 00:07:20.779 12754.314 - 12804.726: 96.7593% ( 14) 00:07:20.779 12804.726 - 12855.138: 96.8428% ( 13) 00:07:20.779 12855.138 - 12905.551: 96.9522% ( 17) 00:07:20.779 12905.551 - 13006.375: 97.1579% ( 32) 00:07:20.779 13006.375 - 13107.200: 97.3380% ( 28) 00:07:20.779 13107.200 - 13208.025: 97.5116% ( 27) 00:07:20.779 13208.025 - 13308.849: 97.6466% ( 21) 00:07:20.779 13308.849 - 13409.674: 97.7366% ( 14) 00:07:20.779 13409.674 - 13510.498: 97.8138% ( 12) 00:07:20.779 13510.498 - 13611.323: 97.8974% ( 13) 00:07:20.779 13611.323 - 13712.148: 97.9810% ( 13) 00:07:20.779 13712.148 - 13812.972: 98.0967% ( 18) 00:07:20.779 13812.972 - 13913.797: 98.1867% ( 14) 00:07:20.779 13913.797 - 14014.622: 98.2639% ( 12) 00:07:20.779 14014.622 - 14115.446: 98.3539% ( 14) 00:07:20.779 14115.446 - 14216.271: 98.4375% ( 13) 00:07:20.779 14216.271 - 14317.095: 98.5725% ( 21) 00:07:20.779 14317.095 - 14417.920: 98.6947% ( 19) 00:07:20.779 14417.920 - 14518.745: 98.8040% ( 17) 00:07:20.779 14518.745 - 14619.569: 98.8876% ( 13) 00:07:20.779 14619.569 - 14720.394: 98.9648% ( 12) 00:07:20.779 14720.394 - 14821.218: 99.0291% ( 10) 00:07:20.779 14821.218 - 14922.043: 99.0612% ( 5) 00:07:20.779 14922.043 - 15022.868: 99.0998% ( 6) 00:07:20.779 15022.868 - 15123.692: 99.1448% ( 7) 00:07:20.779 15123.692 - 15224.517: 99.1770% ( 5) 00:07:20.779 15526.991 - 15627.815: 99.2027% ( 4) 00:07:20.779 15627.815 - 15728.640: 99.2348% ( 5) 00:07:20.779 15728.640 - 15829.465: 99.2605% ( 4) 00:07:20.779 15829.465 - 15930.289: 99.2863% ( 4) 00:07:20.779 15930.289 - 16031.114: 99.3120% ( 4) 00:07:20.779 16031.114 - 16131.938: 99.3441% ( 5) 00:07:20.779 16131.938 - 16232.763: 99.3699% ( 4) 00:07:20.779 16232.763 - 16333.588: 99.3956% ( 4) 00:07:20.779 16333.588 - 16434.412: 99.4213% ( 4) 00:07:20.779 16434.412 - 16535.237: 99.4470% ( 4) 00:07:20.779 16535.237 - 16636.062: 99.4727% ( 4) 00:07:20.779 16636.062 - 16736.886: 99.5049% ( 5) 00:07:20.779 16736.886 - 16837.711: 99.5306% ( 4) 00:07:20.779 16837.711 - 16938.535: 99.5563% ( 4) 00:07:20.779 16938.535 - 17039.360: 99.5885% ( 5) 00:07:20.779 20064.098 - 20164.923: 99.5949% ( 1) 00:07:20.779 20164.923 - 20265.748: 99.6206% ( 4) 00:07:20.779 20265.748 - 20366.572: 99.6463% ( 4) 00:07:20.779 20366.572 - 20467.397: 99.6721% ( 4) 00:07:20.779 20467.397 - 20568.222: 99.6978% ( 4) 00:07:20.779 20568.222 - 20669.046: 99.7235% ( 4) 00:07:20.779 20669.046 - 20769.871: 99.7492% ( 4) 00:07:20.779 20769.871 - 20870.695: 99.7749% ( 4) 00:07:20.780 20870.695 - 20971.520: 99.8071% ( 5) 00:07:20.780 20971.520 - 21072.345: 99.8328% ( 4) 00:07:20.780 21072.345 - 21173.169: 99.8585% ( 4) 00:07:20.780 21173.169 - 21273.994: 99.8843% ( 4) 00:07:20.780 21273.994 - 21374.818: 99.9164% ( 5) 00:07:20.780 21374.818 - 21475.643: 99.9421% ( 4) 00:07:20.780 21475.643 - 21576.468: 99.9678% ( 4) 00:07:20.780 21576.468 - 21677.292: 
09:39:08 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:21.718 Initializing NVMe Controllers
00:07:21.718 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:21.718 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:21.718 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:21.718 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:21.718 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:21.718 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:21.718 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:21.718 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:21.718 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:21.718 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:21.718 Initialization complete. Launching workers.
00:07:21.718 ========================================================
00:07:21.718                                                                              Latency(us)
00:07:21.718 Device Information                         :       IOPS      MiB/s    Average        min        max
00:07:21.718 PCIE (0000:00:10.0) NSID 1 from core  0:    17704.91     207.48    7239.31    5555.11   32134.95
00:07:21.718 PCIE (0000:00:11.0) NSID 1 from core  0:    17704.91     207.48    7228.23    5758.62   30341.12
00:07:21.718 PCIE (0000:00:13.0) NSID 1 from core  0:    17704.91     207.48    7216.95    5569.28   28966.65
00:07:21.718 PCIE (0000:00:12.0) NSID 1 from core  0:    17704.91     207.48    7205.45    5736.92   27116.48
00:07:21.718 PCIE (0000:00:12.0) NSID 2 from core  0:    17704.91     207.48    7194.22    5819.26   25402.07
00:07:21.718 PCIE (0000:00:12.0) NSID 3 from core  0:    17768.83     208.23    7157.17    5649.55   20142.53
00:07:21.718 ========================================================
00:07:21.718 Total                                      :  106293.39    1245.63    7206.86    5555.11   32134.95
00:07:21.718 
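The MiB/s column in the table above follows directly from the IOPS column and the 12288-byte I/O size passed via `-o 12288`. A quick consistency check in plain shell (values copied from the table; any device row gives the same result, since the relation is MiB/s = IOPS x io_size / 2^20):

```sh
#!/bin/sh
# Sanity-check throughput: MiB/s = IOPS * io_size_bytes / 2^20.
# IOPS value taken from the PCIE (0000:00:10.0) NSID 1 row above.
awk 'BEGIN {
    iops    = 17704.91     # from the Device Information table
    io_size = 12288        # bytes, from the "-o 12288" flag
    printf "%.2f MiB/s\n", iops * io_size / 1048576   # prints ~207.48
}'
```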
00:07:21.718 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:21.718 =================================================================================
00:07:21.718   1.00000% :  5973.858us
00:07:21.718  10.00000% :  6326.745us
00:07:21.718  25.00000% :  6553.600us
00:07:21.718  50.00000% :  6856.074us
00:07:21.718  75.00000% :  7360.197us
00:07:21.718  90.00000% :  8519.680us
00:07:21.718  95.00000% :  8973.391us
00:07:21.718  98.00000% :  9679.163us
00:07:21.718  99.00000% : 10334.523us
00:07:21.718  99.50000% : 26617.698us
00:07:21.718  99.90000% : 31860.578us
00:07:21.718  99.99000% : 32263.877us
00:07:21.718  99.99900% : 32263.877us
00:07:21.718  99.99990% : 32263.877us
00:07:21.718  99.99999% : 32263.877us
00:07:21.718 
00:07:21.718 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:21.718 =================================================================================
00:07:21.718   1.00000% :  6074.683us
00:07:21.718  10.00000% :  6377.157us
00:07:21.718  25.00000% :  6604.012us
00:07:21.718  50.00000% :  6856.074us
00:07:21.718  75.00000% :  7259.372us
00:07:21.718  90.00000% :  8519.680us
00:07:21.718  95.00000% :  8872.566us
00:07:21.718  98.00000% :  9527.926us
00:07:21.718  99.00000% : 10183.286us
00:07:21.718  99.50000% : 24802.855us
00:07:21.718  99.90000% : 30045.735us
00:07:21.718  99.99000% : 30449.034us
00:07:21.718  99.99900% : 30449.034us
00:07:21.718  99.99990% : 30449.034us
00:07:21.718  99.99999% : 30449.034us
00:07:21.718 
00:07:21.718 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:21.718 =================================================================================
00:07:21.718   1.00000% :  6099.889us
00:07:21.718  10.00000% :  6351.951us
00:07:21.718  25.00000% :  6604.012us
00:07:21.718  50.00000% :  6856.074us
00:07:21.718  75.00000% :  7259.372us
00:07:21.718  90.00000% :  8519.680us
00:07:21.718  95.00000% :  8922.978us
00:07:21.718  98.00000% :  9578.338us
00:07:21.718  99.00000% : 10284.111us
00:07:21.718  99.50000% : 23895.434us
00:07:21.718  99.90000% : 28634.191us
00:07:21.718  99.99000% : 29037.489us
00:07:21.718  99.99900% : 29037.489us
00:07:21.718  99.99990% : 29037.489us
00:07:21.718  99.99999% : 29037.489us
00:07:21.718 
00:07:21.718 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:21.718 =================================================================================
00:07:21.718   1.00000% :  6099.889us
00:07:21.718  10.00000% :  6351.951us
00:07:21.718  25.00000% :  6604.012us
00:07:21.718  50.00000% :  6856.074us
00:07:21.718  75.00000% :  7259.372us
00:07:21.718  90.00000% :  8519.680us
00:07:21.718  95.00000% :  8872.566us
00:07:21.718  98.00000% :  9679.163us
00:07:21.718  99.00000% : 10435.348us
00:07:21.718  99.50000% : 22080.591us
00:07:21.718  99.90000% : 26819.348us
00:07:21.718  99.99000% : 27222.646us
00:07:21.718  99.99900% : 27222.646us
00:07:21.718  99.99990% : 27222.646us
00:07:21.718  99.99999% : 27222.646us
00:07:21.718 
00:07:21.718 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:21.718 =================================================================================
00:07:21.718   1.00000% :  6099.889us
00:07:21.718  10.00000% :  6377.157us
00:07:21.718  25.00000% :  6604.012us
00:07:21.718  50.00000% :  6856.074us
00:07:21.718  75.00000% :  7309.785us
00:07:21.718  90.00000% :  8519.680us
00:07:21.718  95.00000% :  8922.978us
00:07:21.718  98.00000% :  9578.338us
00:07:21.718  99.00000% : 10334.523us
00:07:21.718  99.50000% : 20265.748us
00:07:21.718  99.90000% : 25004.505us
00:07:21.718  99.99000% : 25407.803us
00:07:21.718  99.99900% : 25407.803us
00:07:21.718  99.99990% : 25407.803us
00:07:21.718  99.99999% : 25407.803us
00:07:21.718 
00:07:21.718 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:21.718 =================================================================================
00:07:21.718   1.00000% :  6099.889us
00:07:21.718  10.00000% :  6377.157us
00:07:21.718  25.00000% :  6604.012us
00:07:21.718  50.00000% :  6856.074us
00:07:21.718  75.00000% :  7309.785us
00:07:21.718  90.00000% :  8519.680us
00:07:21.718  95.00000% :  8872.566us
00:07:21.718  98.00000% :  9527.926us
00:07:21.718  99.00000% : 10284.111us
00:07:21.718  99.50000% : 14619.569us
00:07:21.718  99.90000% : 19761.625us
00:07:21.718  99.99000% : 20164.923us
00:07:21.718  99.99900% : 20164.923us
00:07:21.718  99.99990% : 20164.923us
00:07:21.718  99.99999% : 20164.923us
00:07:21.718 
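For reference, the run that produced the numbers above can be repeated against a local SPDK build with the flags recorded in the trace. A minimal sketch: the repository path matches this CI environment and will differ elsewhere, the `setup.sh` step is a typical local workflow assumption (it is not shown in this log), and the flag descriptions are paraphrased from `spdk_nvme_perf` usage text:

```bash
#!/usr/bin/env bash
# Reproduce the write-latency run recorded above.
# Assumes an SPDK checkout built at $SPDK_DIR; adjust for your environment.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk_repo/spdk}

# Bind NVMe devices to a userspace-capable driver first (typical SPDK workflow).
sudo "$SPDK_DIR/scripts/setup.sh"

# Same flags as the logged invocation:
#   -q 128    queue depth of 128
#   -w write  100% write workload
#   -o 12288  12 KiB I/O size
#   -t 1      run for 1 second
#   -LL       software latency tracking; the doubled flag also prints the
#             detailed per-bucket histograms seen in this log
#   -i 0      shared memory group ID 0
sudo "$SPDK_DIR/build/bin/spdk_nvme_perf" -q 128 -w write -o 12288 -t 1 -LL -i 0
```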
00:07:21.719 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:21.719 ==============================================================================
00:07:21.719        Range in us     Cumulative    IO count
00:07:21.719 [bucket lines elided: cumulative IO rises from 0.0056% at 5545.354 - 5570.560us to 100.0000% at 32062.228 - 32263.877us]
00:07:21.719 
00:07:21.720 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:21.720 ==============================================================================
00:07:21.720        Range in us     Cumulative    IO count
00:07:21.720 [bucket lines elided: cumulative IO rises from 0.0056% at 5747.003 - 5772.209us to 100.0000% at 30247.385 - 30449.034us]
00:07:21.720 
00:07:21.720 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:21.720 ==============================================================================
00:07:21.720        Range in us     Cumulative    IO count
00:07:21.721 [bucket lines elided: cumulative IO rises from 0.0056% at 5545.354 - 5570.560us to 100.0000% at 28835.840 - 29037.489us]
00:07:21.721 
00:07:21.721 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:21.721 ==============================================================================
00:07:21.721        Range in us     Cumulative    IO count
00:07:21.722 [bucket lines elided: cumulative IO rises from 0.0056% at 5721.797 - 5747.003us to 100.0000% at 27020.997 - 27222.646us]
00:07:21.722 
00:07:21.722 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:21.722 ==============================================================================
00:07:21.722        Range in us     Cumulative    IO count
00:07:21.722 [bucket lines shown in this excerpt rise from 0.0056% at 5797.415 - 5822.622us to 88.9271% at 8368.443 - 8418.855us; the histogram continues below]
8469.268: 89.7789% ( 151) 00:07:21.722 8469.268 - 8519.680: 90.8055% ( 182) 00:07:21.722 8519.680 - 8570.092: 91.5388% ( 130) 00:07:21.722 8570.092 - 8620.505: 92.4244% ( 157) 00:07:21.722 8620.505 - 8670.917: 93.1916% ( 136) 00:07:21.722 8670.917 - 8721.329: 93.6428% ( 80) 00:07:21.722 8721.329 - 8771.742: 94.2633% ( 110) 00:07:21.722 8771.742 - 8822.154: 94.6525% ( 69) 00:07:21.722 8822.154 - 8872.566: 94.9458% ( 52) 00:07:21.722 8872.566 - 8922.978: 95.2561% ( 55) 00:07:21.722 8922.978 - 8973.391: 95.5720% ( 56) 00:07:21.722 8973.391 - 9023.803: 95.7863% ( 38) 00:07:21.722 9023.803 - 9074.215: 96.0007% ( 38) 00:07:21.722 9074.215 - 9124.628: 96.2094% ( 37) 00:07:21.722 9124.628 - 9175.040: 96.4858% ( 49) 00:07:21.722 9175.040 - 9225.452: 96.8130% ( 58) 00:07:21.722 9225.452 - 9275.865: 97.0160% ( 36) 00:07:21.722 9275.865 - 9326.277: 97.2529% ( 42) 00:07:21.722 9326.277 - 9376.689: 97.4560% ( 36) 00:07:21.722 9376.689 - 9427.102: 97.6252% ( 30) 00:07:21.722 9427.102 - 9477.514: 97.8170% ( 34) 00:07:21.722 9477.514 - 9527.926: 97.9750% ( 28) 00:07:21.722 9527.926 - 9578.338: 98.0708% ( 17) 00:07:21.722 9578.338 - 9628.751: 98.1442% ( 13) 00:07:21.722 9628.751 - 9679.163: 98.2006% ( 10) 00:07:21.722 9679.163 - 9729.575: 98.3190% ( 21) 00:07:21.722 9729.575 - 9779.988: 98.4206% ( 18) 00:07:21.722 9779.988 - 9830.400: 98.5108% ( 16) 00:07:21.722 9830.400 - 9880.812: 98.5898% ( 14) 00:07:21.722 9880.812 - 9931.225: 98.6349% ( 8) 00:07:21.722 9931.225 - 9981.637: 98.6744% ( 7) 00:07:21.722 9981.637 - 10032.049: 98.7139% ( 7) 00:07:21.722 10032.049 - 10082.462: 98.7421% ( 5) 00:07:21.722 10082.462 - 10132.874: 98.7872% ( 8) 00:07:21.722 10132.874 - 10183.286: 98.8606% ( 13) 00:07:21.722 10183.286 - 10233.698: 98.9226% ( 11) 00:07:21.722 10233.698 - 10284.111: 98.9677% ( 8) 00:07:21.722 10284.111 - 10334.523: 99.0185% ( 9) 00:07:21.722 10334.523 - 10384.935: 99.0523% ( 6) 00:07:21.722 10384.935 - 10435.348: 99.0918% ( 7) 00:07:21.722 10435.348 - 10485.760: 99.1144% ( 4) 00:07:21.722 10485.760 - 10536.172: 99.1257% ( 2) 00:07:21.722 10536.172 - 10586.585: 99.1426% ( 3) 00:07:21.722 10586.585 - 10636.997: 99.1595% ( 3) 00:07:21.722 10636.997 - 10687.409: 99.1821% ( 4) 00:07:21.722 10687.409 - 10737.822: 99.2046% ( 4) 00:07:21.722 10737.822 - 10788.234: 99.2216% ( 3) 00:07:21.722 10788.234 - 10838.646: 99.2441% ( 4) 00:07:21.722 10838.646 - 10889.058: 99.2667% ( 4) 00:07:21.722 10889.058 - 10939.471: 99.2780% ( 2) 00:07:21.722 19257.502 - 19358.326: 99.2949% ( 3) 00:07:21.722 19358.326 - 19459.151: 99.3175% ( 4) 00:07:21.722 19459.151 - 19559.975: 99.3400% ( 4) 00:07:21.722 19559.975 - 19660.800: 99.3626% ( 4) 00:07:21.722 19660.800 - 19761.625: 99.3852% ( 4) 00:07:21.722 19761.625 - 19862.449: 99.4134% ( 5) 00:07:21.722 19862.449 - 19963.274: 99.4359% ( 4) 00:07:21.722 19963.274 - 20064.098: 99.4585% ( 4) 00:07:21.722 20064.098 - 20164.923: 99.4810% ( 4) 00:07:21.722 20164.923 - 20265.748: 99.5093% ( 5) 00:07:21.722 20265.748 - 20366.572: 99.5262% ( 3) 00:07:21.722 20366.572 - 20467.397: 99.5487% ( 4) 00:07:21.722 20467.397 - 20568.222: 99.5713% ( 4) 00:07:21.722 20568.222 - 20669.046: 99.5939% ( 4) 00:07:21.722 20669.046 - 20769.871: 99.6164% ( 4) 00:07:21.722 20769.871 - 20870.695: 99.6390% ( 4) 00:07:21.722 23794.609 - 23895.434: 99.6616% ( 4) 00:07:21.722 23895.434 - 23996.258: 99.6898% ( 5) 00:07:21.722 23996.258 - 24097.083: 99.7123% ( 4) 00:07:21.722 24097.083 - 24197.908: 99.7292% ( 3) 00:07:21.722 24197.908 - 24298.732: 99.7574% ( 5) 00:07:21.722 24298.732 - 24399.557: 99.7800% ( 4) 
00:07:21.722 24399.557 - 24500.382: 99.8026% ( 4) 00:07:21.722 24500.382 - 24601.206: 99.8251% ( 4) 00:07:21.722 24601.206 - 24702.031: 99.8477% ( 4) 00:07:21.722 24702.031 - 24802.855: 99.8703% ( 4) 00:07:21.722 24802.855 - 24903.680: 99.8872% ( 3) 00:07:21.722 24903.680 - 25004.505: 99.9097% ( 4) 00:07:21.722 25004.505 - 25105.329: 99.9323% ( 4) 00:07:21.722 25105.329 - 25206.154: 99.9549% ( 4) 00:07:21.722 25206.154 - 25306.978: 99.9774% ( 4) 00:07:21.722 25306.978 - 25407.803: 100.0000% ( 4) 00:07:21.722 00:07:21.722 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:21.722 ============================================================================== 00:07:21.722 Range in us Cumulative IO count 00:07:21.722 5646.178 - 5671.385: 0.0056% ( 1) 00:07:21.722 5671.385 - 5696.591: 0.0112% ( 1) 00:07:21.722 5822.622 - 5847.828: 0.0169% ( 1) 00:07:21.722 5847.828 - 5873.034: 0.0337% ( 3) 00:07:21.722 5873.034 - 5898.240: 0.0618% ( 5) 00:07:21.722 5898.240 - 5923.446: 0.1180% ( 10) 00:07:21.723 5923.446 - 5948.652: 0.1742% ( 10) 00:07:21.723 5948.652 - 5973.858: 0.2698% ( 17) 00:07:21.723 5973.858 - 5999.065: 0.3653% ( 17) 00:07:21.723 5999.065 - 6024.271: 0.5283% ( 29) 00:07:21.723 6024.271 - 6049.477: 0.7026% ( 31) 00:07:21.723 6049.477 - 6074.683: 0.9161% ( 38) 00:07:21.723 6074.683 - 6099.889: 1.2702% ( 63) 00:07:21.723 6099.889 - 6125.095: 1.9278% ( 117) 00:07:21.723 6125.095 - 6150.302: 2.2707% ( 61) 00:07:21.723 6150.302 - 6175.508: 2.6079% ( 60) 00:07:21.723 6175.508 - 6200.714: 3.2430% ( 113) 00:07:21.723 6200.714 - 6225.920: 3.8388% ( 106) 00:07:21.723 6225.920 - 6251.126: 4.4795% ( 114) 00:07:21.723 6251.126 - 6276.332: 5.4294% ( 169) 00:07:21.723 6276.332 - 6301.538: 7.0762% ( 293) 00:07:21.723 6301.538 - 6326.745: 8.3633% ( 229) 00:07:21.723 6326.745 - 6351.951: 9.3132% ( 169) 00:07:21.723 6351.951 - 6377.157: 10.7464% ( 255) 00:07:21.723 6377.157 - 6402.363: 11.7525% ( 179) 00:07:21.723 6402.363 - 6427.569: 13.0227% ( 226) 00:07:21.723 6427.569 - 6452.775: 14.4559% ( 255) 00:07:21.723 6452.775 - 6503.188: 17.8282% ( 600) 00:07:21.723 6503.188 - 6553.600: 20.9982% ( 564) 00:07:21.723 6553.600 - 6604.012: 26.0004% ( 890) 00:07:21.723 6604.012 - 6654.425: 30.3788% ( 779) 00:07:21.723 6654.425 - 6704.837: 35.5946% ( 928) 00:07:21.723 6704.837 - 6755.249: 41.7603% ( 1097) 00:07:21.723 6755.249 - 6805.662: 48.4150% ( 1184) 00:07:21.723 6805.662 - 6856.074: 54.2941% ( 1046) 00:07:21.723 6856.074 - 6906.486: 58.9928% ( 836) 00:07:21.723 6906.486 - 6956.898: 64.0681% ( 903) 00:07:21.723 6956.898 - 7007.311: 67.1369% ( 546) 00:07:21.723 7007.311 - 7057.723: 69.6156% ( 441) 00:07:21.723 7057.723 - 7108.135: 71.2286% ( 287) 00:07:21.723 7108.135 - 7158.548: 72.6956% ( 261) 00:07:21.723 7158.548 - 7208.960: 73.7129% ( 181) 00:07:21.723 7208.960 - 7259.372: 74.4323% ( 128) 00:07:21.723 7259.372 - 7309.785: 75.1911% ( 135) 00:07:21.723 7309.785 - 7360.197: 76.3489% ( 206) 00:07:21.723 7360.197 - 7410.609: 77.3775% ( 183) 00:07:21.723 7410.609 - 7461.022: 78.1924% ( 145) 00:07:21.723 7461.022 - 7511.434: 78.6365% ( 79) 00:07:21.723 7511.434 - 7561.846: 79.1817% ( 97) 00:07:21.723 7561.846 - 7612.258: 79.8168% ( 113) 00:07:21.723 7612.258 - 7662.671: 80.4856% ( 119) 00:07:21.723 7662.671 - 7713.083: 81.1545% ( 119) 00:07:21.723 7713.083 - 7763.495: 81.6378% ( 86) 00:07:21.723 7763.495 - 7813.908: 82.0088% ( 66) 00:07:21.723 7813.908 - 7864.320: 82.6607% ( 116) 00:07:21.723 7864.320 - 7914.732: 83.1048% ( 79) 00:07:21.723 7914.732 - 7965.145: 83.4420% ( 60) 00:07:21.723 7965.145 - 
8015.557: 83.6724% ( 41) 00:07:21.723 8015.557 - 8065.969: 83.9928% ( 57) 00:07:21.723 8065.969 - 8116.382: 84.5492% ( 99) 00:07:21.723 8116.382 - 8166.794: 85.0495% ( 89) 00:07:21.723 8166.794 - 8217.206: 85.6846% ( 113) 00:07:21.723 8217.206 - 8267.618: 86.4771% ( 141) 00:07:21.723 8267.618 - 8318.031: 87.2134% ( 131) 00:07:21.723 8318.031 - 8368.443: 87.9384% ( 129) 00:07:21.723 8368.443 - 8418.855: 88.6859% ( 133) 00:07:21.723 8418.855 - 8469.268: 89.5459% ( 153) 00:07:21.723 8469.268 - 8519.680: 90.5126% ( 172) 00:07:21.723 8519.680 - 8570.092: 91.4006% ( 158) 00:07:21.723 8570.092 - 8620.505: 92.3786% ( 174) 00:07:21.723 8620.505 - 8670.917: 93.1542% ( 138) 00:07:21.723 8670.917 - 8721.329: 93.7219% ( 101) 00:07:21.723 8721.329 - 8771.742: 94.1659% ( 79) 00:07:21.723 8771.742 - 8822.154: 94.6774% ( 91) 00:07:21.723 8822.154 - 8872.566: 95.0146% ( 60) 00:07:21.723 8872.566 - 8922.978: 95.3462% ( 59) 00:07:21.723 8922.978 - 8973.391: 95.6160% ( 48) 00:07:21.723 8973.391 - 9023.803: 95.9027% ( 51) 00:07:21.723 9023.803 - 9074.215: 96.1724% ( 48) 00:07:21.723 9074.215 - 9124.628: 96.4254% ( 45) 00:07:21.723 9124.628 - 9175.040: 96.7232% ( 53) 00:07:21.723 9175.040 - 9225.452: 96.9537% ( 41) 00:07:21.723 9225.452 - 9275.865: 97.1785% ( 40) 00:07:21.723 9275.865 - 9326.277: 97.3640% ( 33) 00:07:21.723 9326.277 - 9376.689: 97.5326% ( 30) 00:07:21.723 9376.689 - 9427.102: 97.6900% ( 28) 00:07:21.723 9427.102 - 9477.514: 97.8811% ( 34) 00:07:21.723 9477.514 - 9527.926: 98.0047% ( 22) 00:07:21.723 9527.926 - 9578.338: 98.0834% ( 14) 00:07:21.723 9578.338 - 9628.751: 98.1677% ( 15) 00:07:21.723 9628.751 - 9679.163: 98.2576% ( 16) 00:07:21.723 9679.163 - 9729.575: 98.3476% ( 16) 00:07:21.723 9729.575 - 9779.988: 98.4319% ( 15) 00:07:21.723 9779.988 - 9830.400: 98.4881% ( 10) 00:07:21.723 9830.400 - 9880.812: 98.5724% ( 15) 00:07:21.723 9880.812 - 9931.225: 98.6623% ( 16) 00:07:21.723 9931.225 - 9981.637: 98.7410% ( 14) 00:07:21.723 9981.637 - 10032.049: 98.8141% ( 13) 00:07:21.723 10032.049 - 10082.462: 98.8759% ( 11) 00:07:21.723 10082.462 - 10132.874: 98.9152% ( 7) 00:07:21.723 10132.874 - 10183.286: 98.9490% ( 6) 00:07:21.723 10183.286 - 10233.698: 98.9827% ( 6) 00:07:21.723 10233.698 - 10284.111: 99.0164% ( 6) 00:07:21.723 10284.111 - 10334.523: 99.0558% ( 7) 00:07:21.723 10334.523 - 10384.935: 99.0839% ( 5) 00:07:21.723 10384.935 - 10435.348: 99.1232% ( 7) 00:07:21.723 10435.348 - 10485.760: 99.1513% ( 5) 00:07:21.723 10485.760 - 10536.172: 99.1906% ( 7) 00:07:21.723 10536.172 - 10586.585: 99.2131% ( 4) 00:07:21.723 10586.585 - 10636.997: 99.2300% ( 3) 00:07:21.723 10636.997 - 10687.409: 99.2469% ( 3) 00:07:21.723 10687.409 - 10737.822: 99.2637% ( 3) 00:07:21.723 10737.822 - 10788.234: 99.2806% ( 3) 00:07:21.723 13611.323 - 13712.148: 99.2974% ( 3) 00:07:21.723 13712.148 - 13812.972: 99.3199% ( 4) 00:07:21.723 13812.972 - 13913.797: 99.3424% ( 4) 00:07:21.723 13913.797 - 14014.622: 99.3649% ( 4) 00:07:21.723 14014.622 - 14115.446: 99.3874% ( 4) 00:07:21.723 14115.446 - 14216.271: 99.4098% ( 4) 00:07:21.723 14216.271 - 14317.095: 99.4323% ( 4) 00:07:21.723 14317.095 - 14417.920: 99.4548% ( 4) 00:07:21.723 14417.920 - 14518.745: 99.4829% ( 5) 00:07:21.723 14518.745 - 14619.569: 99.5054% ( 4) 00:07:21.723 14619.569 - 14720.394: 99.5279% ( 4) 00:07:21.723 14720.394 - 14821.218: 99.5504% ( 4) 00:07:21.723 14821.218 - 14922.043: 99.5785% ( 5) 00:07:21.723 14922.043 - 15022.868: 99.6009% ( 4) 00:07:21.723 15022.868 - 15123.692: 99.6234% ( 4) 00:07:21.723 15123.692 - 15224.517: 99.6403% ( 3) 
00:07:21.723 18551.729 - 18652.554: 99.6515% ( 2) 00:07:21.723 18652.554 - 18753.378: 99.6740% ( 4) 00:07:21.723 18753.378 - 18854.203: 99.6965% ( 4) 00:07:21.723 18854.203 - 18955.028: 99.7190% ( 4) 00:07:21.723 18955.028 - 19055.852: 99.7415% ( 4) 00:07:21.723 19055.852 - 19156.677: 99.7696% ( 5) 00:07:21.723 19156.677 - 19257.502: 99.7920% ( 4) 00:07:21.723 19257.502 - 19358.326: 99.8145% ( 4) 00:07:21.723 19358.326 - 19459.151: 99.8370% ( 4) 00:07:21.723 19459.151 - 19559.975: 99.8595% ( 4) 00:07:21.723 19559.975 - 19660.800: 99.8820% ( 4) 00:07:21.723 19660.800 - 19761.625: 99.9101% ( 5) 00:07:21.723 19761.625 - 19862.449: 99.9326% ( 4) 00:07:21.723 19862.449 - 19963.274: 99.9550% ( 4) 00:07:21.723 19963.274 - 20064.098: 99.9775% ( 4) 00:07:21.723 20064.098 - 20164.923: 100.0000% ( 4) 00:07:21.723 00:07:21.723 09:39:09 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:21.723 00:07:21.723 real 0m2.494s 00:07:21.723 user 0m2.199s 00:07:21.723 sys 0m0.199s 00:07:21.723 09:39:09 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.723 09:39:09 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:21.723 ************************************ 00:07:21.723 END TEST nvme_perf 00:07:21.723 ************************************ 00:07:21.723 09:39:09 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:21.723 09:39:09 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:21.723 09:39:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.723 09:39:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:21.723 ************************************ 00:07:21.723 START TEST nvme_hello_world 00:07:21.723 ************************************ 00:07:21.723 09:39:09 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:21.981 Initializing NVMe Controllers 00:07:21.981 Attached to 0000:00:10.0 00:07:21.981 Namespace ID: 1 size: 6GB 00:07:21.981 Attached to 0000:00:11.0 00:07:21.982 Namespace ID: 1 size: 5GB 00:07:21.982 Attached to 0000:00:13.0 00:07:21.982 Namespace ID: 1 size: 1GB 00:07:21.982 Attached to 0000:00:12.0 00:07:21.982 Namespace ID: 1 size: 4GB 00:07:21.982 Namespace ID: 2 size: 4GB 00:07:21.982 Namespace ID: 3 size: 4GB 00:07:21.982 Initialization complete. 00:07:21.982 INFO: using host memory buffer for IO 00:07:21.982 Hello world! 00:07:21.982 INFO: using host memory buffer for IO 00:07:21.982 Hello world! 00:07:21.982 INFO: using host memory buffer for IO 00:07:21.982 Hello world! 00:07:21.982 INFO: using host memory buffer for IO 00:07:21.982 Hello world! 00:07:21.982 INFO: using host memory buffer for IO 00:07:21.982 Hello world! 00:07:21.982 INFO: using host memory buffer for IO 00:07:21.982 Hello world! 
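The hello_world example exercised above is a standalone binary, so this stage can be reproduced outside the harness; a minimal sketch, assuming the workspace layout this job uses and that the controllers are bound to a userspace driver first:

    # rebind the NVMe devices to a userspace driver (standard SPDK helper script)
    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    # -i 0 matches the shared-memory group ID the harness passes above
    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0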
00:07:21.982
00:07:21.982 real 0m0.226s
00:07:21.982 user 0m0.077s
00:07:21.982 sys 0m0.100s
00:07:21.982 09:39:09 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:21.982 09:39:09 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:21.982 ************************************
00:07:21.982 END TEST nvme_hello_world
00:07:21.982 ************************************
00:07:21.982 09:39:09 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:21.982 09:39:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:21.982 09:39:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:21.982 09:39:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:21.982 ************************************
00:07:21.982 START TEST nvme_sgl
00:07:21.982 ************************************
00:07:21.982 09:39:09 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:22.248 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:22.248 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:22.248 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:22.248 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:22.248 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:22.248 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:22.248 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:22.248 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:22.248 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:22.248 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:22.248 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:22.248 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:22.248 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:22.248 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:22.248 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:22.248 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:22.248 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:22.248 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:22.248 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:22.248 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:22.248 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:22.248 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:22.248 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:22.248 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:22.248 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:22.248 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:22.248 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:22.248 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:22.248 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:22.248 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:22.248 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:22.248 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:22.248 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:22.248 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:22.248 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:22.248 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:22.539 NVMe Readv/Writev Request test
00:07:22.539 Attached to 0000:00:10.0
00:07:22.539 Attached to 0000:00:11.0
00:07:22.539 Attached to 0000:00:13.0
00:07:22.539 Attached to 0000:00:12.0
00:07:22.539 0000:00:10.0: build_io_request_2 test passed
00:07:22.539 0000:00:10.0: build_io_request_4 test passed
00:07:22.539 0000:00:10.0: build_io_request_5 test passed
00:07:22.539 0000:00:10.0: build_io_request_6 test passed
00:07:22.539 0000:00:10.0: build_io_request_7 test passed
00:07:22.539 0000:00:10.0: build_io_request_10 test passed
00:07:22.539 0000:00:11.0: build_io_request_2 test passed
00:07:22.539 0000:00:11.0: build_io_request_4 test passed
00:07:22.539 0000:00:11.0: build_io_request_5 test passed
00:07:22.539 0000:00:11.0: build_io_request_6 test passed
00:07:22.539 0000:00:11.0: build_io_request_7 test passed
00:07:22.539 0000:00:11.0: build_io_request_10 test passed
00:07:22.539 Cleaning up...
00:07:22.539
00:07:22.539 real 0m0.277s
00:07:22.539 user 0m0.143s
00:07:22.539 sys 0m0.093s
00:07:22.539 09:39:09 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.539 09:39:09 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:22.539 ************************************
00:07:22.539 END TEST nvme_sgl
00:07:22.539 ************************************
00:07:22.539 09:39:09 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:22.539 09:39:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:22.539 09:39:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.539 09:39:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:22.539 ************************************
00:07:22.539 START TEST nvme_e2edp
00:07:22.539 ************************************
00:07:22.539 09:39:09 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:22.539 NVMe Write/Read with End-to-End data protection test
00:07:22.539 Attached to 0000:00:10.0
00:07:22.539 Attached to 0000:00:11.0
00:07:22.539 Attached to 0000:00:13.0
00:07:22.539 Attached to 0000:00:12.0
00:07:22.539 Cleaning up...
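The nvme_dp pass above prints only attach and cleanup lines because the protected write/read cycles log nothing on success; a minimal sketch of invoking the same binary directly, assuming the checkout path from this job (the binary takes no extra flags in this run):

    # write/read test with end-to-end data protection, as named in the log output above
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp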
00:07:22.539
00:07:22.539 real 0m0.204s
00:07:22.539 user 0m0.068s
00:07:22.539 sys 0m0.093s
00:07:22.540 09:39:10 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.540 09:39:10 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:22.540 ************************************
00:07:22.540 END TEST nvme_e2edp
00:07:22.540 ************************************
00:07:22.540 09:39:10 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:22.540 09:39:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:22.540 09:39:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.540 09:39:10 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:22.540 ************************************
00:07:22.540 START TEST nvme_reserve
00:07:22.540 ************************************
00:07:22.540 09:39:10 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:22.799 =====================================================
00:07:22.799 NVMe Controller at PCI bus 0, device 16, function 0
00:07:22.799 =====================================================
00:07:22.799 Reservations: Not Supported
00:07:22.799 =====================================================
00:07:22.799 NVMe Controller at PCI bus 0, device 17, function 0
00:07:22.799 =====================================================
00:07:22.799 Reservations: Not Supported
00:07:22.799 =====================================================
00:07:22.799 NVMe Controller at PCI bus 0, device 19, function 0
00:07:22.799 =====================================================
00:07:22.799 Reservations: Not Supported
00:07:22.799 =====================================================
00:07:22.799 NVMe Controller at PCI bus 0, device 18, function 0
00:07:22.799 =====================================================
00:07:22.799 Reservations: Not Supported
00:07:22.799 Reservation test passed
00:07:22.799
00:07:22.799 real 0m0.213s
00:07:22.799 user 0m0.063s
00:07:22.799 sys 0m0.106s
00:07:22.799 09:39:10 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.799 09:39:10 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:22.799 ************************************
00:07:22.799 END TEST nvme_reserve
00:07:22.799 ************************************
00:07:22.799 09:39:10 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:22.799 09:39:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:22.799 09:39:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.799 09:39:10 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:22.799 ************************************
00:07:22.799 START TEST nvme_err_injection
00:07:22.799 ************************************
00:07:22.799 09:39:10 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:23.057 NVMe Error Injection test
00:07:23.057 Attached to 0000:00:10.0
00:07:23.057 Attached to 0000:00:11.0
00:07:23.057 Attached to 0000:00:13.0
00:07:23.057 Attached to 0000:00:12.0
00:07:23.057 0000:00:10.0: get features failed as expected
00:07:23.057 0000:00:11.0: get features failed as expected
00:07:23.057 0000:00:13.0: get features failed as expected
00:07:23.057 0000:00:12.0: get features failed as expected
00:07:23.057 0000:00:10.0: get features successfully as expected
00:07:23.057 0000:00:11.0: get features successfully as expected
00:07:23.057 0000:00:13.0: get features successfully as expected
00:07:23.057 0000:00:12.0: get features successfully as expected
00:07:23.057 0000:00:10.0: read failed as expected
00:07:23.057 0000:00:11.0: read failed as expected
00:07:23.057 0000:00:13.0: read failed as expected
00:07:23.057 0000:00:12.0: read failed as expected
00:07:23.057 0000:00:10.0: read successfully as expected
00:07:23.057 0000:00:11.0: read successfully as expected
00:07:23.057 0000:00:13.0: read successfully as expected
00:07:23.057 0000:00:12.0: read successfully as expected
00:07:23.057 Cleaning up...
00:07:23.057
00:07:23.057 real 0m0.232s
00:07:23.057 user 0m0.080s
00:07:23.057 sys 0m0.098s
00:07:23.057 09:39:10 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:23.057 09:39:10 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:23.057 ************************************
00:07:23.057 END TEST nvme_err_injection
00:07:23.057 ************************************
00:07:23.057 09:39:10 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:23.057 09:39:10 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:23.057 09:39:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:23.057 09:39:10 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:23.057 ************************************
00:07:23.057 START TEST nvme_overhead
00:07:23.057 ************************************
00:07:23.057 09:39:10 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:24.433 Initializing NVMe Controllers
00:07:24.433 Attached to 0000:00:10.0
00:07:24.433 Attached to 0000:00:11.0
00:07:24.433 Attached to 0000:00:13.0
00:07:24.433 Attached to 0000:00:12.0
00:07:24.433 Initialization complete. Launching workers.
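The overhead tool whose startup is shown above times each IO's submit and complete path on a single core; a minimal sketch of the invocation the harness used, with the flag readings hedged as assumptions from common SPDK tool conventions (-o IO size in bytes, -t run time in seconds, -H print histograms, -i shared-memory group ID):

    # 4 KiB IOs for 1 second, with submit/complete latency histograms enabled
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0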
00:07:24.433 submit (in ns)   avg, min, max =  11658.0,  10876.2,  73829.2
00:07:24.433 complete (in ns) avg, min, max =   7721.7,   7217.7, 318206.9
00:07:24.433
00:07:24.433 Submit histogram
00:07:24.433 ================
00:07:24.433        Range in us     Cumulative     Count
00:07:24.433 [bucket rows omitted: cumulative count rises from 0.0058% at 10.831 us to 100.0000% at 74.043 us]
00:07:24.434
00:07:24.434 Complete histogram
00:07:24.434 ==================
00:07:24.434        Range in us     Cumulative     Count
00:07:24.434 [bucket rows omitted: cumulative count rises from 0.0461% at 7.188 us to 100.0000% at 318.228 us]
00:07:24.434
00:07:24.434
00:07:24.434 real 0m1.216s
00:07:24.434 user 0m1.067s
00:07:24.434 sys 0m0.102s
00:07:24.434 09:39:11 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:24.434 09:39:11 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:07:24.434 ************************************
00:07:24.434 END TEST nvme_overhead
00:07:24.434 ************************************
00:07:24.434 09:39:11 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:24.434 09:39:11 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:24.434 09:39:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:24.434 09:39:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:24.434 ************************************
00:07:24.434 START TEST nvme_arbitration
00:07:24.434 ************************************
00:07:24.434 09:39:11 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:27.718 Initializing NVMe Controllers
00:07:27.718 Attached to 0000:00:10.0
00:07:27.718 Attached to 0000:00:11.0
00:07:27.718 Attached to 0000:00:13.0
00:07:27.718 Attached to 0000:00:12.0
00:07:27.718 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:07:27.718 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:07:27.718 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:07:27.718 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:07:27.718 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:07:27.718 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:07:27.718 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:07:27.718 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:07:27.718 Initialization complete. Launching workers.
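Because the arbitration example echoes its fully expanded configuration, the run above is reproducible verbatim from the log; a minimal sketch, assuming the same build tree (flag readings such as -q queue depth, -w workload, -M read percentage, -t seconds, -c core mask are assumptions):

    # the expanded command line printed by the tool itself above
    sudo /home/vagrant/spdk_repo/spdk/build/examples/arbitration \
        -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0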
00:07:27.718 Starting thread on core 1 with urgent priority queue
00:07:27.718 Starting thread on core 2 with urgent priority queue
00:07:27.718 Starting thread on core 3 with urgent priority queue
00:07:27.718 Starting thread on core 0 with urgent priority queue
00:07:27.718 QEMU NVMe Ctrl (12340 ) core 0:  938.67 IO/s 106.53 secs/100000 ios
00:07:27.718 QEMU NVMe Ctrl (12342 ) core 0:  938.67 IO/s 106.53 secs/100000 ios
00:07:27.718 QEMU NVMe Ctrl (12341 ) core 1:  960.00 IO/s 104.17 secs/100000 ios
00:07:27.718 QEMU NVMe Ctrl (12342 ) core 1:  960.00 IO/s 104.17 secs/100000 ios
00:07:27.718 QEMU NVMe Ctrl (12343 ) core 2:  896.00 IO/s 111.61 secs/100000 ios
00:07:27.718 QEMU NVMe Ctrl (12342 ) core 3:  938.67 IO/s 106.53 secs/100000 ios
00:07:27.718 ========================================================
00:07:27.718
00:07:27.718
00:07:27.718 real 0m3.314s
00:07:27.718 user 0m9.282s
00:07:27.718 sys 0m0.114s
00:07:27.718 09:39:15 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.718 09:39:15 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:07:27.718 ************************************
00:07:27.718 END TEST nvme_arbitration
00:07:27.718 ************************************
00:07:27.718 09:39:15 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:27.718 09:39:15 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:27.718 09:39:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.718 09:39:15 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:27.718 ************************************
00:07:27.718 START TEST nvme_single_aen
00:07:27.718 ************************************
00:07:27.718 09:39:15 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:27.976 Asynchronous Event Request test
00:07:27.976 Attached to 0000:00:10.0
00:07:27.976 Attached to 0000:00:11.0
00:07:27.976 Attached to 0000:00:13.0
00:07:27.976 Attached to 0000:00:12.0
00:07:27.976 Reset controller to setup AER completions for this process
00:07:27.976 Registering asynchronous event callbacks...
00:07:27.976 Getting orig temperature thresholds of all controllers
00:07:27.976 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:27.976 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:27.976 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:27.976 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:27.976 Setting all controllers temperature threshold low to trigger AER
00:07:27.976 Waiting for all controllers temperature threshold to be set lower
00:07:27.976 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:27.976 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:07:27.976 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:27.976 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:07:27.976 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:27.976 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:07:27.976 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:27.976 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:07:27.976 Waiting for all controllers to trigger AER and reset threshold
00:07:27.976 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:27.976 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:27.976 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:27.976 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:27.976 Cleaning up...
00:07:27.976
00:07:27.976 real 0m0.218s
00:07:27.976 user 0m0.074s
00:07:27.976 sys 0m0.096s
00:07:27.976 09:39:15 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.976 09:39:15 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:07:27.976 ************************************
00:07:27.976 END TEST nvme_single_aen
00:07:27.976 ************************************
00:07:27.976 09:39:15 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:07:27.976 09:39:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:27.976 09:39:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.976 09:39:15 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:27.976 ************************************
00:07:27.976 START TEST nvme_doorbell_aers
00:07:27.976 ************************************
00:07:27.976 09:39:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:07:27.976 09:39:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:07:27.976 09:39:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:07:27.976 09:39:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:07:27.976 09:39:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:07:27.976 09:39:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:27.976 09:39:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:07:27.976 09:39:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:27.976 09:39:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:27.976 09:39:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
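The xtrace above shows how get_nvme_bdfs assembles the device list: gen_nvme.sh emits a JSON config and jq extracts each PCIe transport address; a minimal standalone sketch of the same pipeline, assuming this job's repo path, whose output feeds the per-device loop that follows:

    # prints one address per line, e.g. 0000:00:10.0
    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'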
00:07:27.976 09:39:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:27.976 09:39:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:27.976 09:39:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:27.976 09:39:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:28.233 [2024-12-05 09:39:15.797415] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:07:38.207 Executing: test_write_invalid_db 00:07:38.207 Waiting for AER completion... 00:07:38.207 Failure: test_write_invalid_db 00:07:38.207 00:07:38.207 Executing: test_invalid_db_write_overflow_sq 00:07:38.207 Waiting for AER completion... 00:07:38.207 Failure: test_invalid_db_write_overflow_sq 00:07:38.207 00:07:38.207 Executing: test_invalid_db_write_overflow_cq 00:07:38.207 Waiting for AER completion... 00:07:38.207 Failure: test_invalid_db_write_overflow_cq 00:07:38.207 00:07:38.207 09:39:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:38.207 09:39:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:38.207 [2024-12-05 09:39:25.821425] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:07:48.241 Executing: test_write_invalid_db 00:07:48.241 Waiting for AER completion... 00:07:48.241 Failure: test_write_invalid_db 00:07:48.241 00:07:48.241 Executing: test_invalid_db_write_overflow_sq 00:07:48.241 Waiting for AER completion... 00:07:48.241 Failure: test_invalid_db_write_overflow_sq 00:07:48.241 00:07:48.241 Executing: test_invalid_db_write_overflow_cq 00:07:48.241 Waiting for AER completion... 00:07:48.241 Failure: test_invalid_db_write_overflow_cq 00:07:48.241 00:07:48.241 09:39:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:48.241 09:39:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:48.241 [2024-12-05 09:39:35.859451] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:07:58.215 Executing: test_write_invalid_db 00:07:58.215 Waiting for AER completion... 00:07:58.215 Failure: test_write_invalid_db 00:07:58.215 00:07:58.215 Executing: test_invalid_db_write_overflow_sq 00:07:58.215 Waiting for AER completion... 00:07:58.215 Failure: test_invalid_db_write_overflow_sq 00:07:58.215 00:07:58.215 Executing: test_invalid_db_write_overflow_cq 00:07:58.215 Waiting for AER completion... 
00:07:38.207 09:39:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:38.207 09:39:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:07:38.207 [2024-12-05 09:39:25.821425] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request.
00:07:48.241 Executing: test_write_invalid_db
00:07:48.241 Waiting for AER completion...
00:07:48.241 Failure: test_write_invalid_db
00:07:48.241
00:07:48.241 Executing: test_invalid_db_write_overflow_sq
00:07:48.241 Waiting for AER completion...
00:07:48.241 Failure: test_invalid_db_write_overflow_sq
00:07:48.241
00:07:48.241 Executing: test_invalid_db_write_overflow_cq
00:07:48.241 Waiting for AER completion...
00:07:48.241 Failure: test_invalid_db_write_overflow_cq
00:07:48.241
00:07:48.241 09:39:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:48.241 09:39:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:07:48.241 [2024-12-05 09:39:35.859451] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request.
00:07:58.215 Executing: test_write_invalid_db
00:07:58.215 Waiting for AER completion...
00:07:58.215 Failure: test_write_invalid_db
00:07:58.215
00:07:58.215 Executing: test_invalid_db_write_overflow_sq
00:07:58.215 Waiting for AER completion...
00:07:58.215 Failure: test_invalid_db_write_overflow_sq
00:07:58.215
00:07:58.215 Executing: test_invalid_db_write_overflow_cq
00:07:58.215 Waiting for AER completion...
00:07:58.215 Failure: test_invalid_db_write_overflow_cq
00:07:58.215
00:07:58.215 09:39:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:58.215 09:39:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:07:58.472 [2024-12-05 09:39:45.934996] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request.
00:08:08.454 Executing: test_write_invalid_db
00:08:08.454 Waiting for AER completion...
00:08:08.454 Failure: test_write_invalid_db
00:08:08.454
00:08:08.454 Executing: test_invalid_db_write_overflow_sq
00:08:08.454 Waiting for AER completion...
00:08:08.454 Failure: test_invalid_db_write_overflow_sq
00:08:08.454
00:08:08.454 Executing: test_invalid_db_write_overflow_cq
00:08:08.454 Waiting for AER completion...
00:08:08.454 Failure: test_invalid_db_write_overflow_cq
00:08:08.454
00:08:08.454
00:08:08.454 real 0m40.187s
00:08:08.454 user 0m34.154s
00:08:08.454 sys 0m5.632s
00:08:08.454 09:39:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:08.454 09:39:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:08:08.454 ************************************
00:08:08.454 END TEST nvme_doorbell_aers
00:08:08.454 ************************************
00:08:08.454 09:39:55 nvme -- nvme/nvme.sh@97 -- # uname
00:08:08.454 09:39:55 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:08:08.454 09:39:55 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:08:08.454 09:39:55 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:08:08.454 09:39:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:08.454 09:39:55 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:08.454 ************************************
00:08:08.454 START TEST nvme_multi_aen
00:08:08.454 ************************************
00:08:08.454 09:39:55 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:08:08.454 [2024-12-05 09:39:55.945611] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request.
00:08:08.454 [2024-12-05 09:39:55.945672] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request.
00:08:08.454 [2024-12-05 09:39:55.945684] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request.
00:08:08.454 [2024-12-05 09:39:55.947388] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request.
00:08:08.454 [2024-12-05 09:39:55.947430] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request.
00:08:08.454 [2024-12-05 09:39:55.947442] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request.
00:08:08.454 [2024-12-05 09:39:55.948588] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request.
00:08:08.454 [2024-12-05 09:39:55.948618] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request.
00:08:08.454 [2024-12-05 09:39:55.948627] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request.
00:08:08.454 [2024-12-05 09:39:55.949742] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request.
00:08:08.454 [2024-12-05 09:39:55.949768] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request.
00:08:08.454 [2024-12-05 09:39:55.949777] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request.
00:08:08.454 Child process pid: 63696
00:08:08.715 [Child] Asynchronous Event Request test
00:08:08.715 [Child] Attached to 0000:00:10.0
00:08:08.715 [Child] Attached to 0000:00:11.0
00:08:08.715 [Child] Attached to 0000:00:13.0
00:08:08.715 [Child] Attached to 0000:00:12.0
00:08:08.715 [Child] Registering asynchronous event callbacks...
00:08:08.715 [Child] Getting orig temperature thresholds of all controllers
00:08:08.715 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:08.715 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:08.715 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:08.715 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:08.715 [Child] Waiting for all controllers to trigger AER and reset threshold
00:08:08.715 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:08.715 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:08.715 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:08.715 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:08.715 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:08.715 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:08.715 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:08.715 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:08.715 [Child] Cleaning up...
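This is the multi-process variant of the AER test: aer -m forks a child, both processes attach to all four controllers, and each registers its own aer_cb before the temperature threshold is lowered beneath the current temperature to force an Asynchronous Event Notification. Outside this harness the same AEN can be provoked by hand; a sketch assuming nvme-cli and a kernel-owned /dev/nvme0, neither of which is part of this run:

    # FID 0x04 is the Temperature Threshold feature. The devices report
    # 323 K (50 C) above, so any threshold below that forces the AEN.
    nvme get-feature /dev/nvme0 -f 0x04            # default here: 0x157 = 343 K
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x142   # 322 K, just under current

The parent process now repeats the same sequence: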
00:08:08.715 Asynchronous Event Request test
00:08:08.715 Attached to 0000:00:10.0
00:08:08.715 Attached to 0000:00:11.0
00:08:08.715 Attached to 0000:00:13.0
00:08:08.715 Attached to 0000:00:12.0
00:08:08.715 Reset controller to setup AER completions for this process
00:08:08.715 Registering asynchronous event callbacks...
00:08:08.715 Getting orig temperature thresholds of all controllers
00:08:08.715 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:08.716 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:08.716 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:08.716 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:08.716 Setting all controllers temperature threshold low to trigger AER
00:08:08.716 Waiting for all controllers temperature threshold to be set lower
00:08:08.716 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:08.716 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:08:08.716 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:08.716 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:08:08.716 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:08.716 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:08:08.716 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:08.716 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:08:08.716 Waiting for all controllers to trigger AER and reset threshold
00:08:08.716 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:08.716 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:08.716 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:08.716 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:08.716 Cleaning up...
00:08:08.716
00:08:08.716 real 0m0.456s
00:08:08.716 user 0m0.156s
00:08:08.716 sys 0m0.188s
00:08:08.716 09:39:56 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:08.716 09:39:56 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:08:08.716 ************************************
00:08:08.716 END TEST nvme_multi_aen
00:08:08.716 ************************************
00:08:08.716 09:39:56 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:08:08.716 09:39:56 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:08.716 09:39:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:08.716 09:39:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:08.716 ************************************
00:08:08.716 START TEST nvme_startup
00:08:08.716 ************************************
00:08:08.716 09:39:56 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:08:08.975 Initializing NVMe Controllers
00:08:08.975 Attached to 0000:00:10.0
00:08:08.975 Attached to 0000:00:11.0
00:08:08.975 Attached to 0000:00:13.0
00:08:08.975 Attached to 0000:00:12.0
00:08:08.975 Initialization complete.
00:08:08.975 Time used:142765.078 (us).
00:08:08.975
00:08:08.975 real 0m0.205s
00:08:08.975 user 0m0.071s
00:08:08.975 sys 0m0.091s
00:08:08.975 ************************************
00:08:08.975 END TEST nvme_startup
00:08:08.975 ************************************
00:08:08.975 09:39:56 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:08.975 09:39:56 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:08:08.975 09:39:56 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:08:08.975 09:39:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:08.975 09:39:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:08.975 09:39:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:08.975 ************************************
00:08:08.975 START TEST nvme_multi_secondary
00:08:08.975 ************************************
00:08:08.975 09:39:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:08:08.975 09:39:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63747
00:08:08.975 09:39:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:08:08.975 09:39:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63748
00:08:08.975 09:39:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:08:08.975 09:39:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:08:12.271 Initializing NVMe Controllers
00:08:12.271 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:12.271 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:12.271 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:12.271 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:12.271 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:08:12.271 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:08:12.271 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:08:12.271 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:08:12.271 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:08:12.271 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:08:12.271 Initialization complete. Launching workers.
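nvme_multi_secondary drives one primary and two secondary spdk_nvme_perf processes against the same controllers at the same time: -i 0 places all three in a common shared-memory instance so the secondaries can attach to controllers the primary owns, and the -c masks pin them to different cores. The orchestration, reconstructed from the nvme.sh@51-57 trace (the backgrounding is inferred from the recorded pids):

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # 63747, primary-length run
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # 63748
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # foreground
    wait "$pid0"
    wait "$pid1"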
00:08:12.271 ========================================================
00:08:12.271 Latency(us)
00:08:12.271 Device Information : IOPS MiB/s Average min max
00:08:12.271 PCIE (0000:00:10.0) NSID 1 from core 1: 8055.77 31.47 1984.82 699.72 5679.18
00:08:12.271 PCIE (0000:00:11.0) NSID 1 from core 1: 8055.77 31.47 1985.75 721.21 5945.19
00:08:12.271 PCIE (0000:00:13.0) NSID 1 from core 1: 8055.77 31.47 1985.72 715.43 5971.88
00:08:12.271 PCIE (0000:00:12.0) NSID 1 from core 1: 8055.77 31.47 1985.80 707.65 5757.29
00:08:12.271 PCIE (0000:00:12.0) NSID 2 from core 1: 8055.77 31.47 1985.76 716.46 5215.63
00:08:12.271 PCIE (0000:00:12.0) NSID 3 from core 1: 8055.77 31.47 1985.74 714.32 5501.69
00:08:12.271 ========================================================
00:08:12.271 Total : 48334.63 188.81 1985.60 699.72 5971.88
00:08:12.271
00:08:12.271 Initializing NVMe Controllers
00:08:12.271 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:12.271 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:12.271 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:12.271 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:12.271 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:08:12.271 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:08:12.271 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:08:12.271 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:08:12.271 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:08:12.271 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:08:12.271 Initialization complete. Launching workers.
00:08:12.272 ========================================================
00:08:12.272 Latency(us)
00:08:12.272 Device Information : IOPS MiB/s Average min max
00:08:12.272 PCIE (0000:00:10.0) NSID 1 from core 2: 3235.47 12.64 4943.96 971.61 12997.76
00:08:12.272 PCIE (0000:00:11.0) NSID 1 from core 2: 3235.47 12.64 4944.94 1016.38 12235.90
00:08:12.272 PCIE (0000:00:13.0) NSID 1 from core 2: 3235.47 12.64 4944.94 1045.83 12800.47
00:08:12.272 PCIE (0000:00:12.0) NSID 1 from core 2: 3235.47 12.64 4944.90 1078.99 13081.01
00:08:12.272 PCIE (0000:00:12.0) NSID 2 from core 2: 3235.47 12.64 4944.45 1079.50 13458.69
00:08:12.272 PCIE (0000:00:12.0) NSID 3 from core 2: 3235.47 12.64 4944.82 875.07 13628.74
00:08:12.272 ========================================================
00:08:12.272 Total : 19412.81 75.83 4944.67 875.07 13628.74
00:08:12.272
00:08:12.272 09:39:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63747
00:08:14.817 Initializing NVMe Controllers
00:08:14.817 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:14.817 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:14.817 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:14.817 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:14.818 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:14.818 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:14.818 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:14.818 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:14.818 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:14.818 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:14.818 Initialization complete. Launching workers.
00:08:14.818 ========================================================
00:08:14.818 Latency(us)
00:08:14.818 Device Information : IOPS MiB/s Average min max
00:08:14.818 PCIE (0000:00:10.0) NSID 1 from core 0: 11375.64 44.44 1405.30 668.47 5902.86
00:08:14.818 PCIE (0000:00:11.0) NSID 1 from core 0: 11375.64 44.44 1406.12 688.99 6005.84
00:08:14.818 PCIE (0000:00:13.0) NSID 1 from core 0: 11375.64 44.44 1406.10 684.95 6357.93
00:08:14.818 PCIE (0000:00:12.0) NSID 1 from core 0: 11375.64 44.44 1406.08 681.33 6428.94
00:08:14.818 PCIE (0000:00:12.0) NSID 2 from core 0: 11375.64 44.44 1406.05 633.65 5951.36
00:08:14.818 PCIE (0000:00:12.0) NSID 3 from core 0: 11375.64 44.44 1406.03 598.36 5740.15
00:08:14.818 ========================================================
00:08:14.818 Total : 68253.83 266.62 1405.95 598.36 6428.94
00:08:14.818
00:08:14.818 09:40:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63748
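The three result tables are self-consistent: with a per-namespace queue depth of 16 (-q 16), Little's law predicts IOPS = 16 / average latency for each device, which matches the measurements:

    core 1 run:  16 / 1985.60 us  =  8058 IOPS   (table:  8055.77)
    core 2 run:  16 / 4944.67 us  =  3236 IOPS   (table:  3235.47)
    core 0 run:  16 / 1405.95 us  = 11380 IOPS   (table: 11375.64)

A second round now launches with the roles swapped (the 5-second job takes core mask 0x4 this time) and produces the same pattern: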
00:08:14.818 09:40:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63822
00:08:14.818 09:40:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63823
00:08:14.818 09:40:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:08:14.818 09:40:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:08:14.818 09:40:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:08:18.118 Initializing NVMe Controllers
00:08:18.118 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:18.118 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:18.118 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:18.118 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:18.118 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:18.118 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:18.118 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:18.118 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:18.118 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:18.118 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:18.118 Initialization complete. Launching workers.
00:08:18.118 ========================================================
00:08:18.118 Latency(us)
00:08:18.118 Device Information : IOPS MiB/s Average min max
00:08:18.118 PCIE (0000:00:10.0) NSID 1 from core 0: 7820.93 30.55 2044.44 720.23 6119.46
00:08:18.118 PCIE (0000:00:11.0) NSID 1 from core 0: 7820.93 30.55 2045.47 742.26 6030.78
00:08:18.118 PCIE (0000:00:13.0) NSID 1 from core 0: 7820.93 30.55 2045.49 743.23 6527.31
00:08:18.118 PCIE (0000:00:12.0) NSID 1 from core 0: 7820.93 30.55 2045.45 747.06 7043.51
00:08:18.118 PCIE (0000:00:12.0) NSID 2 from core 0: 7820.93 30.55 2045.45 740.71 6946.35
00:08:18.118 PCIE (0000:00:12.0) NSID 3 from core 0: 7820.93 30.55 2045.53 737.40 6205.09
00:08:18.118 ========================================================
00:08:18.118 Total : 46925.56 183.30 2045.31 720.23 7043.51
00:08:18.118
00:08:18.118 Initializing NVMe Controllers
00:08:18.118 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:18.118 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:18.118 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:18.118 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:18.118 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:08:18.118 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:08:18.118 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:08:18.118 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:08:18.118 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:08:18.118 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:08:18.118 Initialization complete. Launching workers.
00:08:18.118 ========================================================
00:08:18.118 Latency(us)
00:08:18.118 Device Information : IOPS MiB/s Average min max
00:08:18.118 PCIE (0000:00:10.0) NSID 1 from core 1: 7646.32 29.87 2091.13 713.25 6562.38
00:08:18.118 PCIE (0000:00:11.0) NSID 1 from core 1: 7646.32 29.87 2092.03 734.42 6857.71
00:08:18.118 PCIE (0000:00:13.0) NSID 1 from core 1: 7646.32 29.87 2091.97 666.88 6824.95
00:08:18.118 PCIE (0000:00:12.0) NSID 1 from core 1: 7646.32 29.87 2091.91 638.66 6810.83
00:08:18.118 PCIE (0000:00:12.0) NSID 2 from core 1: 7646.32 29.87 2091.86 609.61 6349.81
00:08:18.118 PCIE (0000:00:12.0) NSID 3 from core 1: 7646.32 29.87 2091.81 579.12 6604.85
00:08:18.118 ========================================================
00:08:18.118 Total : 45877.91 179.21 2091.79 579.12 6857.71
00:08:18.118
00:08:20.030 Initializing NVMe Controllers
00:08:20.030 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:20.030 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:20.030 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:20.030 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:20.030 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:08:20.030 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:08:20.030 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:08:20.030 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:08:20.030 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:08:20.030 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:08:20.030 Initialization complete. Launching workers.
00:08:20.030 ========================================================
00:08:20.030 Latency(us)
00:08:20.030 Device Information : IOPS MiB/s Average min max
00:08:20.030 PCIE (0000:00:10.0) NSID 1 from core 2: 4572.22 17.86 3499.63 732.22 12739.95
00:08:20.030 PCIE (0000:00:11.0) NSID 1 from core 2: 4572.22 17.86 3501.79 724.59 12758.82
00:08:20.030 PCIE (0000:00:13.0) NSID 1 from core 2: 4572.22 17.86 3501.56 741.35 12977.86
00:08:20.030 PCIE (0000:00:12.0) NSID 1 from core 2: 4572.22 17.86 3501.51 736.21 12468.47
00:08:20.030 PCIE (0000:00:12.0) NSID 2 from core 2: 4572.22 17.86 3501.45 736.65 13163.23
00:08:20.030 PCIE (0000:00:12.0) NSID 3 from core 2: 4572.22 17.86 3501.39 761.36 13232.62
00:08:20.030 ========================================================
00:08:20.030 Total : 27433.30 107.16 3501.22 724.59 13232.62
00:08:20.030
00:08:20.030 09:40:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63822
00:08:20.030 ************************************
00:08:20.030 END TEST nvme_multi_secondary
00:08:20.030 ************************************
00:08:20.030 09:40:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63823
00:08:20.030
00:08:20.030 real 0m10.727s
00:08:20.030 user 0m18.405s
00:08:20.030 sys 0m0.636s
00:08:20.030 09:40:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:20.030 09:40:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:08:20.031 09:40:07 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:08:20.031 09:40:07 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:08:20.031 09:40:07 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62785 ]]
00:08:20.031 09:40:07 nvme -- common/autotest_common.sh@1094 -- # kill 62785
00:08:20.031 09:40:07 nvme -- common/autotest_common.sh@1095 -- # wait 62785
00:08:20.031 [2024-12-05 09:40:07.255522] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.255589] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.255616] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.255632] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.258309] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.258377] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.258395] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.258412] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.260684] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.260735] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.260752] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.260770] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.262884] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.262920] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.262931] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.262942] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63695) is not found. Dropping the request.
00:08:20.031 [2024-12-05 09:40:07.370279] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited.
00:08:20.031 09:40:07 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0
00:08:20.031 09:40:07 nvme -- common/autotest_common.sh@1101 -- # echo 2
00:08:20.031 09:40:07 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:08:20.031 09:40:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:20.031 09:40:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:20.031 09:40:07 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:20.031 ************************************
00:08:20.031 START TEST bdev_nvme_reset_stuck_adm_cmd
00:08:20.031 ************************************
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:08:20.031 * Looking for test storage...
00:08:20.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
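The lt/cmp_versions trace above is the harness checking the installed lcov version before choosing coverage flags: 1.15 is less than 2, so the older --rc lcov_* flag spelling is exported next. Reduced to its essentials (a sketch, not the exact scripts/common.sh code):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
        read -ra ver2 <<< "$3"    # "2"    -> (2)
        # compare field by field; missing fields count as 0
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }
    lt 1.15 2 && echo "older"   # prints "older": the trace's return 0 path

00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:20.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.031 --rc genhtml_branch_coverage=1
00:08:20.031 --rc genhtml_function_coverage=1
00:08:20.031 --rc genhtml_legend=1
00:08:20.031 --rc geninfo_all_blocks=1
00:08:20.031 --rc geninfo_unexecuted_blocks=1
00:08:20.031
00:08:20.031 '
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:20.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.031 --rc genhtml_branch_coverage=1
00:08:20.031 --rc genhtml_function_coverage=1
00:08:20.031 --rc genhtml_legend=1
00:08:20.031 --rc geninfo_all_blocks=1
00:08:20.031 --rc geninfo_unexecuted_blocks=1
00:08:20.031
00:08:20.031 '
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:08:20.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.031 --rc genhtml_branch_coverage=1
00:08:20.031 --rc genhtml_function_coverage=1
00:08:20.031 --rc genhtml_legend=1
00:08:20.031 --rc geninfo_all_blocks=1
00:08:20.031 --rc geninfo_unexecuted_blocks=1
00:08:20.031
00:08:20.031 '
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:08:20.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.031 --rc genhtml_branch_coverage=1
00:08:20.031 --rc genhtml_function_coverage=1
00:08:20.031 --rc genhtml_legend=1
00:08:20.031 --rc geninfo_all_blocks=1
00:08:20.031 --rc geninfo_unexecuted_blocks=1
00:08:20.031
00:08:20.031 '
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5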
09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=()
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:20.031 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:20.032 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:08:20.032 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:08:20.032 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:08:20.032 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0
00:08:20.032 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
00:08:20.032 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63985
00:08:20.032 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:08:20.032 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:08:20.032 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63985
00:08:20.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:20.032 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 63985 ']'
00:08:20.032 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:20.032 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:20.032 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
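waitforlisten blocks until the freshly started spdk_tgt (pid 63985) answers on its RPC socket. In essence it is a poll loop like the following (a sketch; the real helper also bounds the attempts via max_retries=100):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    while kill -0 63985 2>/dev/null; do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done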
00:08:20.032 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:20.032 09:40:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:20.292 [2024-12-05 09:40:07.667614] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:08:20.292 [2024-12-05 09:40:07.667734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63985 ]
00:08:20.292 [2024-12-05 09:40:07.837878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:20.550 [2024-12-05 09:40:07.943713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:20.550 [2024-12-05 09:40:07.944021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:20.550 [2024-12-05 09:40:07.944304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:20.550 [2024-12-05 09:40:07.944406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:21.117 nvme0n1
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_GIAb5.txt
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:21.117 true
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733391608
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64008
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
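Everything the test needs is now staged: an error injection armed on admin opcode 10 (0x0a, Get Features) that holds the command for up to 15 s, and a Get Features command fired in the background via bdev_nvme_send_cmd; the -c payload is the raw base64-encoded 64-byte admin command (opcode 0x0a, cdw10=0x7, i.e. Number of Queues, exactly what the completion print shows later). The RPC sequence, condensed from the trace (the base64 payload is abbreviated here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64 cmd> > /tmp/err_inj_GIAb5.txt &
    get_feat_pid=$!
    sleep 2
    $rpc bdev_nvme_reset_controller nvme0   # reset while the admin command is stuck
    wait "$get_feat_pid"                    # must finish with the injected status

The point of the test: the reset must manually complete the stuck admin command with the injected status (SCT 0 / SC 1) rather than leave it pending forever.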
00:08:21.117 09:40:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:08:23.028 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:08:23.028 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.028 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:23.028 [2024-12-05 09:40:10.629605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:08:23.028 [2024-12-05 09:40:10.630203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:08:23.028 [2024-12-05 09:40:10.630350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:08:23.028 [2024-12-05 09:40:10.630413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:23.028 [2024-12-05 09:40:10.632009] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:08:23.028 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64008
00:08:23.028 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.028 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64008
00:08:23.028 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64008
00:08:23.028 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_GIAb5.txt
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
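The status check can be reproduced by hand. The jq-extracted .cpl field is the 16-byte completion, base64-encoded; base64_decode_bits shifts and masks the status halfword out of it (shift 1 / mask 0xff for SC, shift 9 for SCT, per the calls above):

    base64 -d <(printf '%s' 'AAAAAAAAAAAAAAAAAAACAA==') | hexdump -ve '/1 "0x%02x\n"'
    # all 16 bytes are 0x00 except byte 14, which is 0x02; the status field
    # (bytes 14-15, little-endian) is therefore 0x0002:
    #   bit 0     = phase tag          -> 0
    #   bits 1-8  = SC  = (2 >> 1) & 0xff = 0x1   (Invalid Opcode)
    #   bits 9-11 = SCT = (2 >> 9) & 0x7  = 0x0   (Generic Command Status)

That is exactly the injected --sct 0 --sc 1, so the comparison at nvme_reset_stuck_adm_cmd.sh@75 below passes.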
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_GIAb5.txt
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63985
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 63985 ']'
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 63985
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63985
00:08:23.286 killing process with pid 63985
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63985'
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 63985
00:08:23.286 09:40:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 63985
00:08:24.666 09:40:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:08:24.666 09:40:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:08:24.666
00:08:24.666 real 0m4.643s
00:08:24.666 user 0m16.528s
00:08:24.666 sys 0m0.482s
00:08:24.666 09:40:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:24.666 09:40:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:24.666 ************************************
00:08:24.666 END TEST bdev_nvme_reset_stuck_adm_cmd
00:08:24.666 ************************************
00:08:24.667 09:40:12 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:08:24.667 09:40:12 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:08:24.667 09:40:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:24.667 09:40:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:24.667 09:40:12 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:24.667 ************************************
00:08:24.667 START TEST nvme_fio
00:08:24.667 ************************************
00:08:24.667 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test
00:08:24.667 09:40:12 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:08:24.667 09:40:12 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false
00:08:24.667 09:40:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:08:24.667 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:24.667 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs
00:08:24.667 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:24.667 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:24.667 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:24.667 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:08:24.667 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:08:24.667 09:40:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0')
00:08:24.667 09:40:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf
00:08:24.667 09:40:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:08:24.667 09:40:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:08:24.667 09:40:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:08:24.925 09:40:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:08:24.925 09:40:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:08:25.185 09:40:12 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:08:25.185 09:40:12 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:08:25.185 09:40:12 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:08:25.185 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:08:25.185 fio-3.35
00:08:25.185 Starting 1 thread
00:08:31.786
00:08:31.786 test: (groupid=0, jobs=1): err= 0: pid=64142: Thu Dec 5 09:40:18 2024
00:08:31.786 read: IOPS=24.4k, BW=95.4MiB/s (100MB/s)(191MiB/2001msec)
00:08:31.786 slat (nsec): min=3327, max=68658, avg=4976.95, stdev=2117.89
00:08:31.786 clat (usec): min=239, max=9431, avg=2616.75, stdev=764.04
00:08:31.786 lat (usec): min=245, max=9469, avg=2621.73, stdev=765.42
00:08:31.786 clat percentiles (usec):
00:08:31.786 | 1.00th=[ 1598], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2343],
00:08:31.786 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2442],
00:08:31.786 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 2933], 95.00th=[ 4686],
00:08:31.786 | 99.00th=[ 5800], 99.50th=[ 6259], 99.90th=[ 6587], 99.95th=[ 7504],
00:08:31.786 | 99.99th=[ 9372]
00:08:31.786 bw ( KiB/s): min=94592, max=98856, per=98.46%, avg=96154.67, stdev=2349.01, samples=3
00:08:31.786 iops : min=23648, max=24714, avg=24038.67, stdev=587.25, samples=3
00:08:31.786 write: IOPS=24.3k, BW=94.7MiB/s (99.3MB/s)(190MiB/2001msec); 0 zone resets
00:08:31.786 slat (nsec): min=3446, max=66797, avg=5237.41, stdev=2155.31
00:08:31.786 clat (usec): min=258, max=9350, avg=2623.02, stdev=776.32
00:08:31.786 lat (usec): min=264, max=9388, avg=2628.26, stdev=777.72
00:08:31.786 clat percentiles (usec):
00:08:31.786 | 1.00th=[ 1598], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2343],
00:08:31.786 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2442],
00:08:31.786 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 2966], 95.00th=[ 4817],
00:08:31.786 | 99.00th=[ 5866], 99.50th=[ 6259], 99.90th=[ 6718], 99.95th=[ 7832],
00:08:31.786 | 99.99th=[ 9241]
00:08:31.786 bw ( KiB/s): min=95000, max=98096, per=99.23%, avg=96277.33, stdev=1617.43, samples=3
00:08:31.786 iops : min=23750, max=24524, avg=24069.33, stdev=404.36, samples=3
00:08:31.786 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.04%
00:08:31.786 lat (msec) : 2=3.10%, 4=90.38%, 10=6.45%
00:08:31.786 cpu : usr=99.20%, sys=0.10%, ctx=3, majf=0, minf=606
00:08:31.786 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:08:31.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:31.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:31.786 issued rwts: total=48853,48535,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:31.786 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:31.786
00:08:31.786 Run status group 0 (all jobs):
00:08:31.786 READ: bw=95.4MiB/s (100MB/s), 95.4MiB/s-95.4MiB/s (100MB/s-100MB/s), io=191MiB (200MB), run=2001-2001msec
00:08:31.786 WRITE: bw=94.7MiB/s (99.3MB/s), 94.7MiB/s-94.7MiB/s (99.3MB/s-99.3MB/s), io=190MiB (199MB), run=2001-2001msec
00:08:31.786 -----------------------------------------------------
00:08:31.786 Suppressions used:
00:08:31.786 count bytes template
00:08:31.786 1 32 /usr/src/fio/parse.c
00:08:31.786 1 8 libtcmalloc_minimal.so
00:08:31.786 -----------------------------------------------------
00:08:31.786
00:08:31.786 09:40:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
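One full nvme_fio cycle is now visible: identify confirms the controller has namespaces and no Extended Data LBA, so bs=4096 is chosen, and fio runs with SPDK's external ioengine LD_PRELOADed alongside libasan (which is why the sanitizer library is resolved first). Note that the traddr inside --filename uses dots rather than colons, since fio would otherwise split the filename at the colons. An equivalent direct invocation, with the job parameters read off the fio banner above (example_config.fio itself ships with SPDK and supplies the rest):

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
    /usr/src/fio/fio --name=test --thread=1 --ioengine=spdk \
        '--filename=trtype=PCIe traddr=0000.00.10.0' \
        --rw=randrw --bs=4096 --iodepth=128

The same identify/fio sequence now repeats for the remaining controllers.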
00:08:31.786 09:40:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:08:31.786 09:40:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:08:31.786 09:40:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:08:31.786 09:40:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:08:31.786 09:40:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:08:32.047 09:40:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:08:32.047 09:40:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:08:32.047 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:08:32.047 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:08:32.047 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:08:32.047 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:08:32.048 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:32.048 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:08:32.048 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:08:32.048 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:08:32.048 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:08:32.048 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:32.048 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:08:32.048 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:08:32.048 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:08:32.048 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:08:32.048 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:08:32.048 09:40:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:08:32.048 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:08:32.048 fio-3.35
00:08:32.048 Starting 1 thread
00:08:40.182
00:08:40.182 test: (groupid=0, jobs=1): err= 0: pid=64197: Thu Dec 5 09:40:26 2024
00:08:40.182 read: IOPS=24.9k, BW=97.2MiB/s (102MB/s)(194MiB/2001msec)
00:08:40.182 slat (nsec): min=3356, max=61395, avg=4921.00, stdev=2108.32
00:08:40.182 clat (usec): min=575, max=9442, avg=2570.38, stdev=787.76
00:08:40.182 lat (usec): min=580, max=9456, avg=2575.30, stdev=789.07
00:08:40.182 clat percentiles (usec):
00:08:40.182 | 1.00th=[ 1565], 5.00th=[ 2073], 10.00th=[ 2245], 20.00th=[ 2311],
00:08:40.182 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2409],
00:08:40.182 | 70.00th=[ 2442], 80.00th=[ 2507], 90.00th=[ 2868], 95.00th=[ 4178],
00:08:40.182 | 99.00th=[ 6128], 99.50th=[ 6980], 99.90th=[ 8094], 99.95th=[ 8717],
00:08:40.182 | 99.99th=[ 9372]
00:08:40.182 bw ( KiB/s): min=94264, max=104688, per=99.83%, avg=99328.00, stdev=5218.30, samples=3
00:08:40.182 iops : min=23566, max=26172, avg=24832.00, stdev=1304.58, samples=3
00:08:40.182 write: IOPS=24.7k, BW=96.6MiB/s (101MB/s)(193MiB/2001msec); 0 zone resets
00:08:40.182 slat (nsec): min=3542, max=57713, avg=5170.67, stdev=2055.35
00:08:40.182 clat (usec): min=559, max=9521, avg=2568.77, stdev=786.65
00:08:40.182 lat (usec): min=563, max=9535, avg=2573.94, stdev=787.95
00:08:40.182 clat percentiles (usec):
00:08:40.182 | 1.00th=[ 1532], 5.00th=[ 2057], 10.00th=[ 2245], 20.00th=[ 2311],
00:08:40.182 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2409],
00:08:40.182 | 70.00th=[ 2442], 80.00th=[ 2507], 90.00th=[ 2868], 95.00th=[ 4113],
00:08:40.182 | 99.00th=[ 6128], 99.50th=[ 6980], 99.90th=[ 8094], 99.95th=[ 8717],
00:08:40.182 | 99.99th=[ 9372]
00:08:40.182 bw ( KiB/s): min=94104, max=105728, per=100.00%, avg=99413.33, stdev=5876.85, samples=3
00:08:40.182 iops : min=23526, max=26432, avg=24853.33, stdev=1469.21, samples=3
00:08:40.182 lat (usec) : 750=0.02%, 1000=0.08%
00:08:40.182 lat (msec) : 2=4.07%, 4=90.54%, 10=5.28%
00:08:40.182 cpu : usr=99.40%, sys=0.00%, ctx=5, majf=0, minf=606
00:08:40.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:08:40.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:40.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:40.182 issued rwts: total=49773,49495,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:40.182 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:40.182
00:08:40.182 Run status group 0 (all jobs):
00:08:40.182 READ: bw=97.2MiB/s (102MB/s), 97.2MiB/s-97.2MiB/s (102MB/s-102MB/s), io=194MiB (204MB), run=2001-2001msec
00:08:40.182 WRITE: bw=96.6MiB/s (101MB/s), 96.6MiB/s-96.6MiB/s (101MB/s-101MB/s), io=193MiB (203MB), run=2001-2001msec
00:08:40.182 -----------------------------------------------------
00:08:40.182 Suppressions used:
00:08:40.182 count bytes template
00:08:40.182 1 32 /usr/src/fio/parse.c
00:08:40.182 1 8 libtcmalloc_minimal.so
00:08:40.182 -----------------------------------------------------
00:08:40.182
00:08:40.182 09:40:26 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:08:40.182 09:40:26 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:08:40.182 09:40:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:08:40.182 09:40:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:08:40.182 09:40:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:08:40.182 09:40:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:08:40.182 09:40:27 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:08:40.182 09:40:27 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:08:40.182 09:40:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:08:40.182 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:08:40.182 fio-3.35
00:08:40.182 Starting 1 thread
00:08:46.782
00:08:46.782 test: (groupid=0, jobs=1): err= 0: pid=64258: Thu Dec 5 09:40:33 2024
00:08:46.782 read: IOPS=22.6k, BW=88.3MiB/s (92.6MB/s)(177MiB/2001msec)
00:08:46.782 slat (nsec): min=3419, max=94093, avg=5056.81, stdev=2368.05
00:08:46.782 clat (usec): min=886, max=10312, avg=2824.06, stdev=968.32
00:08:46.782 lat (usec): min=889, max=10377, avg=2829.12, stdev=969.48
00:08:46.782 clat percentiles (usec):
00:08:46.782 | 1.00th=[ 1827], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2311],
00:08:46.782 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2573],
00:08:46.782 | 70.00th=[ 2737], 80.00th=[ 3032], 90.00th=[ 4146], 95.00th=[ 5145],
00:08:46.782 | 99.00th=[ 6652], 99.50th=[ 7046], 99.90th=[ 7701], 99.95th=[ 8029],
00:08:46.782 | 99.99th=[ 8979]
00:08:46.782 bw ( KiB/s): min=82712, max=97776, per=98.40%, avg=88946.67, stdev=7860.04, samples=3
00:08:46.782 iops : min=20678, max=24444, avg=22236.67, stdev=1965.01, samples=3
00:08:46.782 write: IOPS=22.5k, BW=87.8MiB/s (92.1MB/s)(176MiB/2001msec); 0 zone resets
00:08:46.782 slat (usec): min=3, max=128, avg= 5.26, stdev= 2.42
00:08:46.782 clat (usec): min=851, max=10121, avg=2831.43, stdev=970.69
00:08:46.782 lat (usec): min=856, max=10136, avg=2836.69, stdev=971.87
00:08:46.782 clat percentiles (usec):
00:08:46.782 | 1.00th=[ 1844], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2311],
00:08:46.782 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2573],
00:08:46.782 | 70.00th=[ 2737], 80.00th=[ 3032], 90.00th=[ 4178], 95.00th=[ 5145],
00:08:46.782 | 99.00th=[ 6652], 99.50th=[ 7111], 99.90th=[ 7635], 99.95th=[ 7963],
00:08:46.782 | 99.99th=[ 8717]
00:08:46.782 bw ( KiB/s): min=82744, max=98472, per=99.15%, avg=89141.33, stdev=8264.13, samples=3
00:08:46.782 iops : min=20686, max=24618, avg=22285.33, stdev=2066.03, samples=3
00:08:46.782 lat (usec) : 1000=0.02%
00:08:46.782 lat (msec) : 2=1.84%, 4=87.30%, 10=10.83%, 20=0.01%
00:08:46.782 cpu : usr=99.25%, sys=0.00%, ctx=5, majf=0, minf=606
00:08:46.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:08:46.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:46.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:46.782 issued rwts: total=45219,44973,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:46.782 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:46.782
00:08:46.782 Run status group 0 (all jobs):
00:08:46.782 READ: bw=88.3MiB/s (92.6MB/s), 88.3MiB/s-88.3MiB/s (92.6MB/s-92.6MB/s), io=177MiB (185MB), run=2001-2001msec
00:08:46.782 WRITE: bw=87.8MiB/s (92.1MB/s), 87.8MiB/s-87.8MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec
00:08:46.782 -----------------------------------------------------
00:08:46.782 Suppressions used:
00:08:46.782 count bytes template
00:08:46.782 1 32 /usr/src/fio/parse.c
00:08:46.782 1 8 libtcmalloc_minimal.so
00:08:46.782 -----------------------------------------------------
00:08:46.782
00:08:46.782 09:40:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:08:46.782 09:40:34 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:08:46.782 09:40:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:08:46.782 09:40:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:08:47.044 09:40:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:08:47.044 09:40:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:08:47.044 09:40:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:08:47.044 09:40:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:08:47.044 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:47.044 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:47.044 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:47.044 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:47.044 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:47.044 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:47.044 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:47.044 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:47.044 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:47.044 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:47.044 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:47.044 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:47.044 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:47.044 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:47.045 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:47.045 09:40:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:47.306 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:47.306 fio-3.35 00:08:47.306 Starting 1 thread 00:08:55.444 00:08:55.444 test: (groupid=0, jobs=1): err= 0: pid=64325: Thu Dec 5 09:40:41 2024 00:08:55.444 read: IOPS=17.3k, BW=67.8MiB/s (71.1MB/s)(136MiB/2001msec) 00:08:55.444 slat (nsec): min=3854, max=78151, avg=6092.23, stdev=3434.22 00:08:55.444 clat (usec): min=1084, max=9943, avg=3666.97, stdev=1391.00 00:08:55.444 lat (usec): min=1088, max=9950, avg=3673.06, stdev=1392.54 00:08:55.444 clat percentiles (usec): 00:08:55.444 | 1.00th=[ 2089], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2638], 00:08:55.444 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 3032], 60.00th=[ 3326], 00:08:55.444 | 70.00th=[ 3982], 80.00th=[ 4948], 90.00th=[ 5932], 95.00th=[ 6587], 00:08:55.444 | 99.00th=[ 7504], 99.50th=[ 7898], 99.90th=[ 8455], 99.95th=[ 8717], 00:08:55.444 | 99.99th=[ 9634] 00:08:55.444 bw ( KiB/s): min=67360, max=78648, per=100.00%, avg=71786.67, stdev=6024.98, samples=3 00:08:55.444 iops : min=16840, max=19662, avg=17946.67, stdev=1506.25, samples=3 00:08:55.444 write: IOPS=17.4k, BW=67.8MiB/s (71.1MB/s)(136MiB/2001msec); 0 zone resets 00:08:55.444 slat (nsec): min=4013, max=84156, avg=6220.79, stdev=3411.47 00:08:55.444 clat (usec): min=1131, max=10061, avg=3682.98, stdev=1391.82 00:08:55.444 lat (usec): min=1142, max=10084, avg=3689.20, stdev=1393.32 00:08:55.444 clat percentiles (usec): 00:08:55.444 | 1.00th=[ 2089], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:08:55.444 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 3064], 60.00th=[ 3326], 00:08:55.444 | 70.00th=[ 3949], 80.00th=[ 4948], 90.00th=[ 5997], 95.00th=[ 6652], 00:08:55.444 | 99.00th=[ 7570], 99.50th=[ 7898], 99.90th=[ 
8586], 99.95th=[ 8717], 00:08:55.444 | 99.99th=[ 9765] 00:08:55.444 bw ( KiB/s): min=67096, max=78840, per=100.00%, avg=71736.00, stdev=6247.71, samples=3 00:08:55.444 iops : min=16774, max=19710, avg=17934.00, stdev=1561.93, samples=3 00:08:55.444 lat (msec) : 2=0.65%, 4=69.73%, 10=29.63%, 20=0.01% 00:08:55.444 cpu : usr=98.80%, sys=0.10%, ctx=4, majf=0, minf=604 00:08:55.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:55.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:55.444 issued rwts: total=34712,34739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:55.444 00:08:55.444 Run status group 0 (all jobs): 00:08:55.444 READ: bw=67.8MiB/s (71.1MB/s), 67.8MiB/s-67.8MiB/s (71.1MB/s-71.1MB/s), io=136MiB (142MB), run=2001-2001msec 00:08:55.444 WRITE: bw=67.8MiB/s (71.1MB/s), 67.8MiB/s-67.8MiB/s (71.1MB/s-71.1MB/s), io=136MiB (142MB), run=2001-2001msec 00:08:55.444 ----------------------------------------------------- 00:08:55.444 Suppressions used: 00:08:55.444 count bytes template 00:08:55.444 1 32 /usr/src/fio/parse.c 00:08:55.444 1 8 libtcmalloc_minimal.so 00:08:55.444 ----------------------------------------------------- 00:08:55.444 00:08:55.444 09:40:42 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:55.444 09:40:42 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:55.444 00:08:55.444 real 0m30.115s 00:08:55.444 user 0m19.068s 00:08:55.444 sys 0m19.546s 00:08:55.444 ************************************ 00:08:55.444 END TEST nvme_fio 00:08:55.444 ************************************ 00:08:55.444 09:40:42 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.444 09:40:42 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:55.444 00:08:55.444 real 1m39.016s 00:08:55.444 user 3m39.385s 00:08:55.444 sys 0m29.848s 00:08:55.444 09:40:42 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.444 09:40:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.444 ************************************ 00:08:55.444 END TEST nvme 00:08:55.444 ************************************ 00:08:55.444 09:40:42 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:55.444 09:40:42 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:55.444 09:40:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.444 09:40:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.444 09:40:42 -- common/autotest_common.sh@10 -- # set +x 00:08:55.444 ************************************ 00:08:55.444 START TEST nvme_scc 00:08:55.444 ************************************ 00:08:55.444 09:40:42 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:55.444 * Looking for test storage... 
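Note on the three fio runs that just finished (traddr 0000:00:11.0, 0000:00:12.0, 0000:00:13.0): each one resolves the ASan runtime out of the fio plugin's own dependencies and preloads it ahead of the plugin itself, which is what the LD_PRELOAD lines in the xtrace show. A condensed sketch of that dance, with paths copied from the trace (the real helper in common/autotest_common.sh also probes libclang_rt.asan and loops over controllers):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  # pick the sanitizer runtime the plugin was linked against, as in the trace
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  if [[ -n "$asan_lib" ]]; then
      # preload ASan first so its interceptors win, then the SPDK ioengine
      LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
          /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
          '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
  fi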
00:08:55.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:55.444 09:40:42 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:55.444 09:40:42 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:55.444 09:40:42 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:55.444 09:40:42 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:55.444 09:40:42 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.444 09:40:42 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.444 09:40:42 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.444 09:40:42 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.444 09:40:42 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.444 09:40:42 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.444 09:40:42 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.444 09:40:42 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.444 09:40:42 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.444 09:40:42 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.444 09:40:42 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.444 09:40:42 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:55.445 09:40:42 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.445 09:40:42 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:55.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.445 --rc genhtml_branch_coverage=1 00:08:55.445 --rc genhtml_function_coverage=1 00:08:55.445 --rc genhtml_legend=1 00:08:55.445 --rc geninfo_all_blocks=1 00:08:55.445 --rc geninfo_unexecuted_blocks=1 00:08:55.445 00:08:55.445 ' 00:08:55.445 09:40:42 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:55.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.445 --rc genhtml_branch_coverage=1 00:08:55.445 --rc genhtml_function_coverage=1 00:08:55.445 --rc genhtml_legend=1 00:08:55.445 --rc geninfo_all_blocks=1 00:08:55.445 --rc geninfo_unexecuted_blocks=1 00:08:55.445 00:08:55.445 ' 00:08:55.445 09:40:42 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:55.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.445 --rc genhtml_branch_coverage=1 00:08:55.445 --rc genhtml_function_coverage=1 00:08:55.445 --rc genhtml_legend=1 00:08:55.445 --rc geninfo_all_blocks=1 00:08:55.445 --rc geninfo_unexecuted_blocks=1 00:08:55.445 00:08:55.445 ' 00:08:55.445 09:40:42 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:55.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.445 --rc genhtml_branch_coverage=1 00:08:55.445 --rc genhtml_function_coverage=1 00:08:55.445 --rc genhtml_legend=1 00:08:55.445 --rc geninfo_all_blocks=1 00:08:55.445 --rc geninfo_unexecuted_blocks=1 00:08:55.445 00:08:55.445 ' 00:08:55.445 09:40:42 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:55.445 09:40:42 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:55.445 09:40:42 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:55.445 09:40:42 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:55.445 09:40:42 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.445 09:40:42 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.445 09:40:42 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.445 09:40:42 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.445 09:40:42 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.445 09:40:42 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:55.445 09:40:42 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
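The lcov probe traced just above (before the PATH exports) boils down to a field-wise version comparison: scripts/common.sh splits each version string on '.', '-' and ':' and compares the fields numerically. A minimal sketch of that logic under the same splitting rules (the function name lt comes from the trace; padding missing fields to 0 is an assumption):

  lt() {  # succeeds when version $1 sorts before version $2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1  # equal versions are not 'less than'
  }
  lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # matches the lt 1.15 2 call in the trace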
00:08:55.445 09:40:42 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:55.445 09:40:42 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:55.445 09:40:42 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:55.445 09:40:42 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:55.445 09:40:42 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:55.445 09:40:42 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:55.445 09:40:42 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:55.445 09:40:42 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:55.445 09:40:42 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:55.445 09:40:42 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:55.445 09:40:42 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:55.445 09:40:42 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:55.445 09:40:42 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:55.445 09:40:42 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:55.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:55.445 Waiting for block devices as requested 00:08:55.445 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:55.445 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:55.445 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:55.445 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:00.747 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:00.747 09:40:48 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:00.747 09:40:48 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:00.747 09:40:48 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:00.747 09:40:48 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:00.747 09:40:48 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
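The register dump starting here is nvme_get from test/common/nvme/functions.sh filling a global associative array (nvme0) from nvme id-ctrl output, one "reg : val" pair per line, exactly as the IFS=: / read -r reg val lines in the trace suggest. A minimal standalone sketch of that parse (the whitespace trimming shown is an assumption; the real function evals into an array selected by name):

  declare -A nvme0
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}              # keys like 'vid', 'sn', 'mdts'
      [[ -n $reg && -n $val ]] && nvme0[$reg]=${val# }
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "${nvme0[vid]}"                      # -> 0x1b36, as stored above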
00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:00.747 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
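Most of the hex values being stored here are bitmasks from the Identify Controller structure. Taking the lpa=0x7 just recorded as an example, the set bits can be decoded by hand (bit meanings per the NVMe base specification's Log Page Attributes field):

  lpa=0x7
  (( lpa & 0x1 )) && echo 'SMART / health log page per namespace'
  (( lpa & 0x2 )) && echo 'commands supported and effects log page'
  (( lpa & 0x4 )) && echo 'extended data for Get Log Page'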
00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.748 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:00.749 09:40:48 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.749 09:40:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:00.749 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:00.750 09:40:48 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:00.750 
09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.750 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
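[Editor's note] The run of IFS=: / read -r reg val / eval triples above and below is the body of nvme_get in nvme/functions.sh: it walks the "register : value" lines that nvme-cli prints for id-ns/id-ctrl and stores every non-empty value into a global associative array named after the device (ng0n1 here). A minimal sketch of that pattern, assuming a hypothetical stand-alone helper nvme_get_sketch and an nvme binary on PATH (the real function does more whitespace normalization than shown):

    # Sketch of the parsing loop traced above: one associative-array entry
    # per register reported by nvme-cli.
    nvme_get_sketch() {
        local ref=$1 id_cmd=$2 dev=$3 reg val
        local -gA "$ref=()"              # declare the global array, as at functions.sh@20
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}     # 'nsze    ' -> 'nsze'
            val=${val# }                 # keep trailing padding (sn/mn are fixed-width)
            [[ -n $val ]] && eval "${ref}[${reg}]=\"\$val\""
        done < <(nvme "$id_cmd" "$dev")
    }

Called as nvme_get_sketch ng0n1 id-ns /dev/ng0n1, this leaves ${ng0n1[nsze]} holding 0x140000, matching the assignments logged here.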
00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:00.751 09:40:48 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.751 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:00.752 09:40:48 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:00.752 09:40:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.752 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:00.753 09:40:48 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:00.753 09:40:48 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:00.753 09:40:48 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:00.753 09:40:48 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
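[Editor's note] The lbaf0-lbaf7 rows just captured for nvme0n1 (and for ng0n1 before it) are the namespace's eight advertised LBA formats, and flbas=0x4 selects the one in use through its low nibble: lbaf4, whose lbads:12 means 2^12 = 4096-byte blocks and ms:0 means no per-block metadata, consistent with the "(in use)" marker in the trace. With that table stored, the scan registers nvme0 in the ctrls/nvmes/bdfs/ordered_ctrls maps and moves to nvme1, which passes the pci_can_use allow/block-list gate because both filter lists are empty here. A short decode of the format fields, assuming the nvme0n1 array populated above:

    # Resolve the in-use LBA format from the parsed id-ns fields.
    flbas_idx=$(( ${nvme0n1[flbas]} & 0xf ))    # bits 3:0 -> index 4
    in_use=${nvme0n1[lbaf$flbas_idx]}           # 'ms:0 lbads:12 rp:0 (in use)'
    [[ $in_use =~ lbads:([0-9]+) ]] &&
        echo "block size: $(( 1 << BASH_REMATCH[1] )) bytes"   # -> 4096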
00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:00.753 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
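[Editor's note] The first id-ctrl registers identify the controller: vid=0x1b36 and ssvid=0x1af4 are Red Hat's PCI vendor IDs, as expected for a QEMU-emulated device, and sn, mn and fr are fixed-width, space-padded ASCII fields, which is why the parsed values keep their trailing blanks ('12340 ', 'QEMU NVMe Ctrl ', '8.0.0 '). Stripping that padding for display, assuming the nvme1 array filled above:

    # sn/mn/fr come back padded to their fixed field widths; trim for display.
    shopt -s extglob
    for f in sn mn fr; do
        printf '%s=[%s]\n' "$f" "${nvme1[$f]%%+([[:space:]])}"
    done
    # -> sn=[12340]  mn=[QEMU NVMe Ctrl]  fr=[8.0.0]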
00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
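[Editor's note] mdts=7, recorded just above, caps per-command data transfer at 2^MDTS units of the controller's minimum memory page size (CAP.MPSMIN; assumed to be the usual 4 KiB here), i.e. 512 KiB, while mdts=0 would mean no limit. A quick computation under that page-size assumption:

    # Max data transfer size implied by mdts (0 would mean unlimited).
    mps_min=4096                               # assumed CAP.MPSMIN page size
    mdts=${nvme1[mdts]}
    (( mdts )) && echo "max transfer: $(( (1 << mdts) * mps_min / 1024 )) KiB"   # -> 512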
00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:00.754 09:40:48 nvme_scc --
nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.754 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
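[Editor's note] sqes=0x66 and cqes=0x44 above each pack two log2 sizes: bits 3:0 give the required queue entry size and bits 7:4 the maximum, so this controller works with exactly 2^6 = 64-byte submission queue entries and 2^4 = 16-byte completion queue entries (nn=256 is the namespace count it supports). Unpacking the fields from the nvme1 array filled above:

    # sqes/cqes: bits 3:0 = required entry size, bits 7:4 = maximum (log2 each).
    for f in sqes cqes; do
        v=${nvme1[$f]}
        printf '%s: %d..%d bytes\n' "$f" $(( 1 << (v & 0xf) )) $(( 1 << ((v >> 4) & 0xf) ))
    done
    # -> sqes: 64..64 bytes, cqes: 16..16 bytes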
00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.755 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.756 09:40:48 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:00.756 09:40:48 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:00.756 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
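The loop that discovered ng1n1 above relies on bash extglob plus two parameter expansions on the controller path. A short sketch of just that enumeration, assuming ctrl=/sys/class/nvme/nvme1 as in this run:

shopt -s extglob
ctrl=/sys/class/nvme/nvme1
# ${ctrl##*nvme} -> "1" (controller index), ${ctrl##*/} -> "nvme1" (device name),
# so the pattern becomes /sys/class/nvme/nvme1/@(ng1|nvme1n)* and matches both
# the generic char namespaces (ng1n1) and the block namespaces (nvme1n1).
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue
    echo "namespace node: ${ns##*/}"
done

Each match is then fed back through the same nvme_get parser, this time with id-ns, which is what produces the ng1n1[...] assignments in the surrounding trace.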
00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:00.757 09:40:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.757 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 
09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
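Both namespace dumps record flbas=0x7 next to eight lbafN descriptors, with lbaf7 marked "(in use)". The low nibble of flbas selects the active LBA format, and lbads is log2 of the data block size, so these namespaces are formatted with 4096-byte blocks and 64 bytes of metadata. A small decode sketch using the values captured above:

declare -A ng1n1=([flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)')
fmt=$(( ${ng1n1[flbas]} & 0xf ))             # low nibble: index of the active format
lbaf=${ng1n1[lbaf$fmt]}
lbads=${lbaf##*lbads:} ; lbads=${lbads%% *}  # pull out the lbads field -> 12
ms=${lbaf#ms:} ; ms=${ms%% *}                # pull out the metadata size -> 64
echo "format $fmt: $(( 1 << lbads ))-byte blocks, ${ms}B metadata"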
00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:00.758 
09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.758 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:00.759 09:40:48 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:00.759 09:40:48 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:00.759 09:40:48 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:00.759 09:40:48 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:00.759 09:40:48 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.759 09:40:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
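The second controller reports ver=0x10400. The NVMe version field packs the major, minor, and tertiary numbers into the top 16, middle 8, and bottom 8 bits respectively, so this is a 1.4.0 controller; a one-liner to decode it:

ver=0x10400
printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))   # NVMe 1.4.0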
00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.760 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:00.761 09:40:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:00.761 09:40:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.761 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@21 
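The sqes value just parsed (and the cqes value that follows) packs two log2 sizes into one byte: the low nibble is the required queue-entry size, the high nibble the maximum. A minimal decode of the two values reported in this run; the variable names are illustrative, not part of functions.sh:

    sqes=0x66   # submission queue entry size byte from identify-controller
    cqes=0x44   # completion queue entry size byte
    printf 'SQ entry: %d..%d bytes\n' $((1 << (sqes & 0xf))) $((1 << ((sqes >> 4) & 0xf)))   # -> 64..64
    printf 'CQ entry: %d..%d bytes\n' $((1 << (cqes & 0xf))) $((1 << ((cqes >> 4) & 0xf)))   # -> 16..16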
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:00.762 09:40:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
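Each block of trace above is one iteration of the same read loop in nvme/functions.sh: nvme-cli's identify output is split on ':' and every non-empty field is stored into a global associative array named after the device. A minimal sketch of that pattern, assuming bash 4.3+ and an installed nvme binary; the whitespace trimming is illustrative, not the exact upstream code:

    # Parse "field : value" lines from an nvme-cli identify command into a
    # globally visible associative array named $1 (e.g. nvme2), mirroring the
    # functions.sh@16-23 trace above.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                        # e.g. declares nvme2=() globally
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}               # strip whitespace from the field name
            val=${val#"${val%%[![:space:]]*}"}     # trim leading spaces from the value
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\"\$val\""        # e.g. nvme2[ctratt]="0x8000"
        done < <("$@")                             # run the identify command given as $2..
    }

    # Usage, matching this run (assumes /dev/nvme2 exists):
    #   nvme_get nvme2 nvme id-ctrl /dev/nvme2
    #   echo "${nvme2[subnqn]}"                    # -> nqn.2019-08.org.qemu:12342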
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:09:00.763 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
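The lbafN descriptors recorded for ng2n1 above encode metadata bytes (ms), the log2 data size (lbads), and relative performance (rp); the low nibble of flbas selects the in-use format, whose lbads of 12 means 4096-byte logical blocks, and multiplying by nsze gives the namespace size. A small worked decode using only values from this log; the grep-based parsing is illustrative:

    flbas=0x4
    lbaf4='ms:0 lbads:12 rp:0 (in use)'
    nsze=0x100000                                   # namespace size in blocks, from ng2n1[nsze]
    fmt=$(( flbas & 0xf ))                          # low nibble picks the in-use LBA format
    lbads=$(grep -o 'lbads:[0-9]*' <<<"$lbaf4" | cut -d: -f2)
    bs=$(( 1 << lbads ))                            # -> 4096-byte blocks
    echo "format $fmt: ${bs}B blocks, $(( nsze * bs / 1024 / 1024 / 1024 )) GiB"   # -> 4 GiB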
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:09:00.764 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:00.765 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
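Between namespaces, the loop at functions.sh@54 re-enters with an extglob pattern that matches both the generic character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...) under the controller's sysfs directory, then keys the _ctrl_ns map by namespace id. A standalone sketch of that discovery step, assuming the same /sys/class/nvme/nvme2 layout as this run; the scaffolding around the glob is reconstructed, not copied from functions.sh:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns                    # nameref, as at functions.sh@53
    # "ng${ctrl##*nvme}" expands to "ng2", "${ctrl##*/}n" to "nvme2n",
    # so the glob picks up ng2n* and nvme2n* entries under $ctrl.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                            # e.g. ng2n3
        _ctrl_ns[${ns##*n}]=$ns_dev                 # index by namespace id: 1, 2, 3
    done
    for id in "${!_ctrl_ns[@]}"; do echo "ns $id -> ${_ctrl_ns[$id]}"; done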
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.766 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.767 09:40:48 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.767 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:00.768 09:40:48 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:00.768 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:01.033 09:40:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:01.033 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:01.034 09:40:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:01.034 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:01.035 
09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:01.035 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:01.036 09:40:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:01.036 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:01.037 09:40:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:01.037 09:40:48 nvme_scc -- 
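
Editorial aside: the wall of trace above is test/common/nvme/functions.sh building a bash map of namespace nvme2n3. nvme_get runs nvme-cli's id-ns, splits each "reg : val" output line on the colon, and evals the pair into a global associative array so later checks can read fields like ${nvme2n3[nsze]} directly. A condensed sketch of that loop, with the argument handling and trimming simplified relative to the real helper (which the trace shows invoking the full /usr/local/src/nvme-cli/nvme path after a shift):

    # Condensed sketch of the nvme_get loop traced above (simplified;
    # the real helper lives in test/common/nvme/functions.sh).
    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"              # matches the traced: local -gA 'nvme2n3=()'
        while IFS=: read -r reg val; do
            reg=${reg%% *}               # "nsze    " -> "nsze"
            [[ -n $val ]] || continue    # the traced [[ -n '' ]] skips blank fields
            eval "${ref}[\$reg]=\"\${val# }\""   # e.g. nvme2n3[nsze]="0x100000"
        done < <(nvme "$cmd" "$dev")     # nvme id-ns /dev/nvme2n3, etc.
    }

The lbaf0 through lbaf7 rows just captured are the namespace's supported LBA formats; lbads is a power-of-two exponent, so the in-use lbaf4 with lbads:12 means 4096-byte blocks, which is exactly the "Namespace Block Size:4096" the copy test prints further down.
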
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:01.037 09:40:48 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:01.037 09:40:48 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:01.037 09:40:48 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:01.037 09:40:48 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.037 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:01.038 09:40:48 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:01.038 09:40:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 
09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.038 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:01.039 09:40:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 
09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:01.039 
09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:01.039 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.040 09:40:48 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:01.040 09:40:48 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:01.040 09:40:48 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
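
Editorial aside: the ctrl_has_scc probes running through this stretch (nvme1 and nvme0 above, nvme3 and nvme2 just below) all reduce to one bit test. get_oncs pulls the ONCS (Optional NVM Command Support) mask out of the per-controller array, and bit 8 of that mask advertises the Copy command that the simple-copy (SCC) test needs; 0x15d has bit 8 set, so every controller passes, and the selection logic just below echoes the lowest-numbered match, nvme1, which nvme_scc.sh pairs with bdf 0000:00:10.0. Restated as a minimal standalone check:

    # Minimal restatement of the traced ctrl_has_scc: ONCS bit 8 is the
    # Copy command. 0x15d = 0b1_0101_1101 -> bit 8 set -> SCC supported.
    ctrl_has_scc_sketch() {
        local ctrl=$1 oncs
        local -n _ctrl=$ctrl         # nameref into the nvme0/nvme1/... arrays
        oncs=${_ctrl[oncs]:-0}
        (( oncs & 1 << 8 ))          # non-zero => exit 0 => has SCC
    }

Note that (( oncs & 1 << 8 )) relies on C-style precedence inside (( )): << binds tighter than &, so no parentheses are needed.
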
00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:09:01.040 09:40:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:09:01.041 09:40:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:09:01.041 09:40:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:09:01.041 09:40:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:01.041 09:40:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:01.041 09:40:48 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:01.041 09:40:48 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:01.041 09:40:48 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:09:01.041 09:40:48 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:09:01.041 09:40:48 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:09:01.041 09:40:48 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:09:01.041 09:40:48 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:09:01.041 09:40:48 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:09:01.041 09:40:48 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:01.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:01.874 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:01.874 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:01.874 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:01.874 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:01.874 09:40:49 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:01.874 09:40:49 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:01.874 09:40:49 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:01.874 09:40:49 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:01.874 ************************************
00:09:01.874 START TEST nvme_simple_copy
00:09:01.874 ************************************
00:09:01.874 09:40:49 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:02.135 Initializing NVMe Controllers
00:09:02.135 Attaching to 0000:00:10.0
00:09:02.135 Controller supports SCC. Attached to 0000:00:10.0
00:09:02.135 Namespace ID: 1 size: 6GB
00:09:02.135 Initialization complete.
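
Editorial aside, for orientation before the result lines: simple_copy fills LBAs 0-63 with random data, issues a single Simple Copy command pointing that range at destination LBA 256, reads the destination back, and counts matching blocks; 64 of 64 matching is the pass condition, as the result lines just below report. A hypothetical shell approximation of the same round trip via nvme-cli follows; the copy flag names are recalled from memory and should be confirmed against nvme copy --help before use, and the device node and block size are just what this particular run implies:

    # Hypothetical approximation of the simple_copy exercise.
    # --sdlba/--slbs/--blocks are assumptions to verify; /dev/nvme1n1 and
    # the 4096 B block size come from this run, not universal constants.
    dev=/dev/nvme1n1
    dd if=/dev/urandom of=/tmp/pattern bs=4096 count=64
    dd if=/tmp/pattern of="$dev" bs=4096 count=64 oflag=direct   # LBAs 0-63
    nvme copy "$dev" --sdlba=256 --slbs=0 --blocks=63            # NLB is 0-based
    dd if="$dev" of=/tmp/readback bs=4096 skip=256 count=64 iflag=direct
    cmp /tmp/pattern /tmp/readback && echo "all 64 LBAs match"
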
00:09:02.135 00:09:02.135 Controller QEMU NVMe Ctrl (12340 ) 00:09:02.135 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:02.135 Namespace Block Size:4096 00:09:02.135 Writing LBAs 0 to 63 with Random Data 00:09:02.135 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:02.135 LBAs matching Written Data: 64 00:09:02.135 00:09:02.135 real 0m0.257s 00:09:02.135 user 0m0.088s 00:09:02.135 sys 0m0.068s 00:09:02.135 09:40:49 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.135 09:40:49 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:02.135 ************************************ 00:09:02.135 END TEST nvme_simple_copy 00:09:02.135 ************************************ 00:09:02.135 00:09:02.135 real 0m7.481s 00:09:02.135 user 0m1.009s 00:09:02.135 sys 0m1.297s 00:09:02.135 09:40:49 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.135 09:40:49 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:02.135 ************************************ 00:09:02.135 END TEST nvme_scc 00:09:02.135 ************************************ 00:09:02.395 09:40:49 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:02.395 09:40:49 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:02.395 09:40:49 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:02.395 09:40:49 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:02.395 09:40:49 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:02.395 09:40:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.395 09:40:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.395 09:40:49 -- common/autotest_common.sh@10 -- # set +x 00:09:02.395 ************************************ 00:09:02.395 START TEST nvme_fdp 00:09:02.395 ************************************ 00:09:02.395 09:40:49 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:02.395 * Looking for test storage... 00:09:02.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:02.395 09:40:49 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:02.395 09:40:49 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:02.395 09:40:49 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:02.396 09:40:49 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:02.396 09:40:49 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.396 09:40:49 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:02.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.396 --rc genhtml_branch_coverage=1 00:09:02.396 --rc genhtml_function_coverage=1 00:09:02.396 --rc genhtml_legend=1 00:09:02.396 --rc geninfo_all_blocks=1 00:09:02.396 --rc geninfo_unexecuted_blocks=1 00:09:02.396 00:09:02.396 ' 00:09:02.396 09:40:49 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:02.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.396 --rc genhtml_branch_coverage=1 00:09:02.396 --rc genhtml_function_coverage=1 00:09:02.396 --rc genhtml_legend=1 00:09:02.396 --rc geninfo_all_blocks=1 00:09:02.396 --rc geninfo_unexecuted_blocks=1 00:09:02.396 00:09:02.396 ' 00:09:02.396 09:40:49 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:02.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.396 --rc genhtml_branch_coverage=1 00:09:02.396 --rc genhtml_function_coverage=1 00:09:02.396 --rc genhtml_legend=1 00:09:02.396 --rc geninfo_all_blocks=1 00:09:02.396 --rc geninfo_unexecuted_blocks=1 00:09:02.396 00:09:02.396 ' 00:09:02.396 09:40:49 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:02.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.396 --rc genhtml_branch_coverage=1 00:09:02.396 --rc genhtml_function_coverage=1 00:09:02.396 --rc genhtml_legend=1 00:09:02.396 --rc geninfo_all_blocks=1 00:09:02.396 --rc geninfo_unexecuted_blocks=1 00:09:02.396 00:09:02.396 ' 00:09:02.396 09:40:49 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:02.396 09:40:49 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:02.396 09:40:49 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:02.396 09:40:49 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:02.396 09:40:49 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
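
Editorial aside: the lt 1.15 2 detour traced above goes through cmp_versions in scripts/common.sh, a field-wise version comparison. Both strings are split on ".", "-" and ":" via IFS, then compared element by element up to the longer of the two lengths, which is the (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) loop bound in the trace. A self-contained sketch covering just the strict operators; the real helper tracks lt/gt/eq flags through its case statement, normalizes each field with its decimal guard, and also handles <= and >=:

    # Self-contained sketch of cmp_versions as traced (strict ops only).
    cmp_versions_sketch() {
        local op=$2 v ver1 ver2 ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]                # fell through: versions are equal
    }
    # cmp_versions_sketch 1.15 '<' 2  -> true, so the lcov version gate passes
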
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.396 09:40:49 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.396 09:40:49 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.396 09:40:49 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.396 09:40:49 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.396 09:40:49 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:02.396 09:40:49 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.396 09:40:49 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:02.396 09:40:49 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:02.396 09:40:49 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:02.396 09:40:49 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:02.396 09:40:49 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:02.396 09:40:49 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:02.396 09:40:49 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:02.396 09:40:49 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:02.396 09:40:49 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:02.396 09:40:49 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:02.396 09:40:49 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:02.658 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:02.917 Waiting for block devices as requested 00:09:02.917 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:02.917 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:02.917 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:03.178 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.484 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:08.484 09:40:55 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:08.484 09:40:55 nvme_fdp 
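Context for the trace that follows: setup.sh has just rebound the four QEMU controllers (0000:00:10.0 through 0000:00:13.0) to the kernel nvme driver, and scan_nvme_ctrls from test/common/nvme/functions.sh begins filling the ctrls/nvmes/bdfs associative arrays declared above. A minimal sketch of that discovery walk, assuming this simplified body (only the array names, the sysfs layout, and the PCI addresses come from the log; the loop body here is illustrative, not the verbatim functions.sh source):

    # Sketch: walk /sys/class/nvme and record each controller's PCI address.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        name=${ctrl##*/}                                  # e.g. nvme0
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
        bdfs[$name]=$bdf
    done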
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:08.484 09:40:55 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:08.484 09:40:55 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:08.484 09:40:55 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:08.484 09:40:55 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.484 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:08.485 09:40:55 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:08.485 09:40:55 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:08.485 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:08.486 09:40:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.486 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.487 
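The register dump above and below is nvme_get turning the output of `nvme id-ctrl /dev/nvme0` into the nvme0 associative array, one eval per field, with IFS=: splitting each "reg : val" line. A minimal sketch of the same parse, assuming a fixed array name instead of the eval-based indirection the real helper uses:

    # Sketch: parse "field : value" lines from nvme-cli into a bash array.
    declare -A ctrl0
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue      # skip the banner and blank lines
        reg=${reg//[[:space:]]/}       # field name, e.g. vid, mdts, oncs
        ctrl0[$reg]=${val# }           # value text, e.g. 0x1b36
    done < <(nvme id-ctrl /dev/nvme0)

Of the values captured here, oncs=0x15d is the one the earlier simple-copy test leaned on: bit 8 of ONCS is Copy command support.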
09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:08.487 09:40:55 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:08.487 09:40:55 
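With the controller fields stored, the loop above moves on to namespaces: the extglob pattern in the for statement matches both the generic character node ng0n1 and the block node nvme0n1 under /sys/class/nvme/nvme0, and nvme_get runs again with id-ns for each. A standalone sketch of that glob (extglob is assumed on, as scripts/common.sh sets with `shopt -s extglob` earlier in this log):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme0
    inst=${ctrl##*nvme}                               # controller instance: 0
    for ns in "$ctrl/"@("ng${inst}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        echo "namespace node: ${ns##*/}"              # ng0n1, then nvme0n1
    done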
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.487 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:08.488 09:40:55 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
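A quick sanity check on the geometry just parsed: nsze, ncap and nuse all read 0x140000 blocks, and flbas=0x4 selects LBA format 4, which the lbaf listing a little further down shows as lbads:12, i.e. 4096-byte blocks, consistent with the "Namespace Block Size:4096" line in the simple-copy summary at the top of this section. That puts the namespace at exactly 5 GiB:

    blocks=$((0x140000))       # 1310720 logical blocks
    bytes=$((blocks * 4096))   # 5368709120 bytes
    echo "$blocks blocks x 4096 B = $bytes bytes ($((bytes >> 30)) GiB)"   # 5 GiB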
00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:08.488 09:40:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.488 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
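The eight lbaf0..lbaf7 descriptors above cycle metadata sizes 0/8/16/64 over lbads 9 (512-byte) and lbads 12 (4096-byte) blocks, with lbaf4 marked in use. Decoding the in-use format is just flbas bits 3:0 plus a power of two; a minimal sketch with the field values copied from the dump:

    flbas=0x4
    fmt=$(( flbas & 0xf ))                        # in-use LBA format index: 4
    lbaf='ms:0 lbads:12 rp:0 (in use)'            # the lbaf4 descriptor above
    [[ $lbaf =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    echo "lbaf$fmt data block size: $((1 << lbads)) bytes"   # 4096

The identical dump that follows for nvme0n1 is the same id-ns data read back through the block node; the values match ng0n1 field for field.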
00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.489 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:08.490 09:40:55 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:08.490 09:40:55 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:08.490 09:40:55 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:08.491 09:40:55 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:08.491 09:40:55 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:08.491 09:40:55 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:08.491 09:40:55 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:08.491 09:40:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.491 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
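Just before this id-ctrl dump, the trace entered the controller loop (functions.sh@47-51) and gated 0000:00:10.0 through pci_can_use (scripts/common.sh@18-27): the block-list regex test came up empty, the allow-list was empty too, so the function returned 0 and nvme1 was admitted. A sketch of that gate; the list variable names are an assumption, since the empty expansions in the trace hide them:

    # Shape reconstructed from scripts/common.sh@18-27 above;
    # PCI_BLOCKED/PCI_ALLOWED are assumed names, not confirmed by this log.
    pci_can_use() {
        local i
        [[ ${PCI_BLOCKED[*]} =~ $1 ]] && return 1   # explicit block list wins
        [[ -z ${PCI_ALLOWED[*]} ]] && return 0      # no allow list: everything passes
        for i in "${PCI_ALLOWED[@]}"; do
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }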
00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
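Every register captured this way becomes an ordinary associative-array lookup afterwards, and because the values keep their 0x prefix they drop straight into bash arithmetic. As one worked example, the ver=0x10400 stored above is the NVMe VS field (major version in bits 31:16, minor in 15:8, tertiary in 7:0), so this controller reports NVMe 1.4.0. The helper below is hypothetical, for illustration only:

    # Hypothetical helper, not from functions.sh: decode nvme1[ver]=0x10400.
    nvme_version() {
        local -n _ctrl=$1
        local ver=${_ctrl[ver]}
        printf '%d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    }
    nvme_version nvme1   # prints: 1.4.0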
00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:08.492 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.492 09:40:55 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:08.493 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:08.494 09:40:55 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
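The ng1n1 walk in progress here came in through the per-controller namespace loop traced at functions.sh@53-58: a nameref binds _ctrl_ns to the nvme1_ns array, and an extglob pattern matches both the generic character node (ng1n1) and the block node (nvme1n1) under /sys/class/nvme/nvme1. Reconstructed as a standalone fragment; the wrapper function name is made up, the loop body mirrors the trace:

    # Loop body reconstructed from functions.sh@53-58; wrapper name is hypothetical.
    shopt -s extglob
    scan_ctrl_namespaces() {
        local ctrl=$1 ns ns_dev                  # e.g. /sys/class/nvme/nvme1
        local -n _ctrl_ns=${ctrl##*/}_ns         # binds nvme1_ns (declared elsewhere)
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}                     # ng1n1, then nvme1n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev          # keyed by the trailing namespace id
        done
    }

Because ng1n1 and nvme1n1 share the trailing id, the second nvme_get overwrites _ctrl_ns[1]; that is why the earlier nvme0 pass first recorded ng0n1 and then nvme0n1 at the same index.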
00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:08.494 09:40:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.494 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:08.495 09:40:55 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:08.495 09:40:55 nvme_fdp -- 
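The frames above capture the whole of nvme_get's mechanism: it shifts off the target array name, declares it as a global associative array, runs the bundled nvme-cli binary, and turns each "field : value" output line into an eval'd array assignment. A minimal bash sketch reconstructed from the functions.sh@16-23 frames in this trace (not the verbatim upstream source; the exact nvme invocation and whitespace trimming here are assumptions):

    #!/usr/bin/env bash
    # Sketch of nvme_get as reconstructed from the xtrace frames above.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"            # e.g. local -gA 'ng1n1=()'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue  # skip lines without a 'field : value' pair
            # e.g. eval 'ng1n1[flbas]="0x7"'
            eval "${ref}[${reg// /}]=\"${val# }\""
        done < <(nvme "$@")            # the harness runs its own build: /usr/local/src/nvme-cli/nvme
    }

    # Usage, as seen in the trace:
    #   nvme_get nvme1n1 id-ns /dev/nvme1n1
    #   echo "${nvme1n1[flbas]}"   # -> 0x7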
00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:08.495 09:40:55 nvme_fdp -- nvme/functions.sh@21-23 -- # [xtrace condensed] nvme1n1[] id-ns fields, one eval per field:
00:09:08.495   nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7
00:09:08.496   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:08.496   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:08.496   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:08.496   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:08.496   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:08.496   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:08.496 09:40:55 nvme_fdp -- nvme/functions.sh@21-23 -- # [xtrace condensed] nvme1n1[] id-ns fields, continued:
00:09:08.496   lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:08.497 09:40:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:08.497 09:40:55 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:08.497 09:40:55 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:08.497 09:40:55 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:08.497 09:40:55 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:08.497 09:40:55 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:08.497 09:40:55 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:08.497 09:40:55 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:08.497 09:40:55 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:08.497 09:40:55 nvme_fdp -- scripts/common.sh@18-27 -- # no PCI allow/block lists set -> return 0
00:09:08.497 09:40:55 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:08.497 09:40:55 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:08.497 09:40:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
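At functions.sh@47-63 the trace shows the enclosing scan: iterate every /sys/class/nvme/nvme* controller, gate it on pci_can_use (the scripts/common.sh allow/block-list check), parse id-ctrl, then glob both the ng<N>n* character node and the nvme<N>n* block node for id-ns. A sketch of that loop under the same names (pci_can_use and nvme_get are the traced helpers; the PCI-address lookup via readlink is an assumption, not verbatim upstream code):

    #!/usr/bin/env bash
    # Sketch of the controller/namespace scan seen at nvme/functions.sh@47-63.
    shopt -s extglob nullglob   # extglob for @(...), nullglob so empty matches vanish

    scan_nvme_ctrls() {
        local ctrl ctrl_dev ns pci
        declare -gA ctrls nvmes bdfs
        declare -ga ordered_ctrls
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:12.0 (assumed lookup)
            pci_can_use "$pci" || continue                   # PCI allow/block-list gate
            ctrl_dev=${ctrl##*/}                             # e.g. nvme2
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            declare -gA "${ctrl_dev}_ns=()"                  # per-controller namespace map
            local -n _ctrl_ns=${ctrl_dev}_ns
            # Matches both ng2n1 and nvme2n1; both are parsed, and the block
            # node overwrites the same numeric index, as the trace shows.
            for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
                [[ -e $ns ]] || continue
                nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"
                _ctrl_ns[${ns##*n}]=${ns##*/}
            done
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }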
00:09:08.497 09:40:55 nvme_fdp -- nvme/functions.sh@21-23 -- # [xtrace condensed] nvme2[] id-ctrl fields, one eval per field:
00:09:08.497   vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 '
00:09:08.497   rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0
00:09:08.497   oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:09:08.498   crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3
00:09:08.498   frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:09:08.498   mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0
00:09:08.498   kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0
00:09:08.499   nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0
00:09:08.499   domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d
00:09:08.499   fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3
00:09:08.499   sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342
00:09:08.500   ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0
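Everything id-ctrl and id-ns report is now addressable as a plain bash lookup, e.g. ${nvme2[ctratt]} or ${nvme1n1[flbas]}. As one hypothetical example of consuming these arrays (not a helper from this log): the in-use LBA data size falls out of FLBAS plus the matching lbafN string.

    #!/usr/bin/env bash
    # Illustrative only: with nvme1n1[flbas]=0x7 and
    # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' this prints 4096.
    lba_size() {
        local -n _ns=$1
        local fmt
        fmt=$((${_ns[flbas]} & 0xf))   # FLBAS bits 3:0 select the LBA format
        [[ ${_ns[lbaf$fmt]} =~ lbads:([0-9]+) ]] || return 1
        echo $((1 << BASH_REMATCH[1])) # LBADS is a power-of-two exponent
    }

    lba_size nvme1n1   # -> 4096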
00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21 -- # 
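The field assignments condensed above all come from one small loop in nvme/functions.sh (traced at @20-23): nvme-cli prints identify data as "reg : val" lines, which the script splits on ':' with read and stores into a global associative array via eval. A minimal sketch of that pattern in bash -- the helper name nvme_get_sketch and the whitespace trimming are illustrative assumptions, not the script's exact code:

    #!/usr/bin/env bash
    # Sketch of the traced parse loop: run an nvme-cli identify command,
    # split each "reg : val" output line on the first ':', and store the
    # pair in a global associative array named by $ref (cf. @20-23).
    nvme_get_sketch() {                  # hypothetical helper for illustration
        local ref=$1 reg val
        shift
        local -gA "$ref=()"              # global assoc array, as @20 does
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue    # skip lines with no value (@22)
            reg=${reg//[[:space:]]/}     # trim key padding, "lbaf  0" -> lbaf0 (assumed)
            val=${val# }                 # trim one leading space (assumed)
            eval "${ref}[\$reg]=\"\$val\""   # assign, as @23 does
        done < <("$@")
    }
    # e.g.: nvme_get_sketch ng2n1 nvme id-ns /dev/ng2n1
    #       echo "${ng2n1[nsze]}"        # -> 0x100000 on this rig

Note that with a non-whitespace IFS, read leaves everything after the first colon in val, which is how multi-colon values such as the ps0 and lbaf strings above survive intact.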
00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@55-57 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] -> ns_dev=ng2n1, nvme_get ng2n1 id-ns /dev/ng2n1
00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:09:08.500 09:40:55 nvme_fdp -- nvme/functions.sh@21-23 -- # ng2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:08.501 09:40:55 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:08.501 09:40:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
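The @54 loop header is worth unpacking: it is a single extglob that enumerates both the character ('ng') and block ('nvme...n') namespace nodes of one controller. With ctrl=/sys/class/nvme/nvme2, "${ctrl##*nvme}" leaves "2" and "${ctrl##*/}" leaves "nvme2", so the pattern expands to entries matching ng2n* or nvme2n*. A standalone sketch, assuming extglob is enabled as the real script's pattern requires:

    #!/usr/bin/env bash
    shopt -s extglob                     # @(...) alternation needs extglob
    ctrl=/sys/class/nvme/nvme2           # example controller path
    # The glob becomes @("ng2"|"nvme2n")* and matches ng2n1, ng2n2, ...,
    # nvme2n1, ... inside the controller's sysfs directory (cf. @54).
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue         # @55 guard; also skips a literal non-match
        echo "namespace node: ${ns##*/}"
    done

On this run the glob yields the char nodes ng2n1, ng2n2, ng2n3 first and then the block node nvme2n1, exactly the order the trace walks below.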
00:09:08.501 09:40:55 nvme_fdp -- nvme/functions.sh@55-57 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] -> ns_dev=ng2n2, nvme_get ng2n2 id-ns /dev/ng2n2
00:09:08.501 09:40:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:09:08.502 09:40:55 nvme_fdp -- nvme/functions.sh@21-23 -- # ng2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 (LBA formats identical to ng2n1, lbaf4 in use)
00:09:08.503 09:40:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
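Each parsed namespace is then registered in a per-controller index (@53 and @58): local -n makes _ctrl_ns a nameref, i.e. an alias for the concrete nvme2_ns array, and "${ns##*n}" strips everything up to the last 'n', leaving the namespace number as the key. A sketch of that bookkeeping; register_ns is an illustrative wrapper, not a function in the script:

    #!/usr/bin/env bash
    declare -A nvme2_ns=()
    # Register a namespace device under its numeric suffix, through a
    # nameref so the same code serves any controller (cf. @53 and @58).
    register_ns() {                      # illustrative helper
        local -n _ctrl_ns=$1             # alias for the per-controller array
        local ns=$2
        _ctrl_ns[${ns##*n}]=$ns          # ng2n2 -> key 2, nvme2n1 -> key 1
    }
    register_ns nvme2_ns ng2n1
    register_ns nvme2_ns ng2n2
    declare -p nvme2_ns   # e.g. declare -A nvme2_ns=([1]="ng2n1" [2]="ng2n2")

Because "${ns##*n}" maps nvme2n1 to key 1 as well, scanning the block node after the char nodes would overwrite the ng2n1 entry for that index, leaving the block device as the recorded handle.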
00:09:08.503 09:40:55 nvme_fdp -- nvme/functions.sh@55-57 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] -> ns_dev=ng2n3, nvme_get ng2n3 id-ns /dev/ng2n3
00:09:08.503 09:40:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:09:08.503 09:40:56 nvme_fdp -- nvme/functions.sh@21-23 -- # ng2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 (LBA formats identical to ng2n1, lbaf4 in use)
00:09:08.504 09:40:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
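All the nvme2 namespaces report the same geometry: flbas=0x4 selects lbaf4 ('ms:0 lbads:12 rp:0 (in use)'), i.e. 4096-byte logical blocks with no metadata, so nsze=0x100000 blocks comes to 4 GiB per namespace. A small sketch of that decoding, using values copied from the trace; the arithmetic and sed extraction are illustrative, not part of functions.sh:

    #!/usr/bin/env bash
    # Decode the in-use LBA format from the captured identify fields:
    # FLBAS bits 3:0 pick the lbafN entry; lbads is log2(block size).
    declare -A ng2n3=([nsze]=0x100000 [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt=$(( ng2n3[flbas] & 0xf ))                       # -> 4
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${ng2n3[lbaf$fmt]}")
    echo "block size: $(( 1 << lbads )) bytes"          # 4096
    echo "capacity:   $(( ng2n3[nsze] * (1 << lbads) >> 30 )) GiB"   # 4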
00:09:08.504 09:40:56 nvme_fdp -- nvme/functions.sh@55-57 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] -> ns_dev=nvme2n1, nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:08.504 09:40:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.505 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:08.506 
09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
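The frames above are bash xtrace output from nvme/functions.sh: nvme_get declares a global associative array named after the device (local -gA 'nvme2n1=()'), runs nvme-cli's id-ns against it, and splits each output line on ':' into a register/value pair, skipping header lines whose value field is empty and eval-ing the rest into the array. A minimal sketch of that loop, assuming nvme-cli's plain id-ns text format (the helper name and trimming details here are illustrative, not SPDK's exact code):

nvme_get_sketch() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # e.g. declares a global nvme2n1=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # header lines carry no value field
        reg=${reg//[[:space:]]/}         # "lbaf  0 " -> "lbaf0"
        val=${val# }
        eval "${ref}[${reg}]=\"\${val}\""   # nvme2n1[nsze]=0x100000, ...
    done < <("$@")
}
# Usage mirroring the trace:
# nvme_get_sketch nvme2n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1

The same loop then repeats for the controller's remaining namespaces, which is why the nvme2n2 and nvme2n3 dumps that follow are field-for-field identical to this one.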
00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:08.506 09:40:56 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.506 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:08.507 09:40:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:08.507 09:40:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:08.508 09:40:56 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:08.508 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:08.771 09:40:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.771 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:08.771 09:40:56 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:08.772 09:40:56 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:08.772 09:40:56 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:08.772 09:40:56 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:08.772 09:40:56 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
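Identify data for the next controller is gathered with the same helper, only with id-ctrl instead of id-ns, so controller-level registers (vid 0x1b36, sn '12343', mn 'QEMU NVMe Ctrl', mdts 7, and so on) land in the nvme3 array. One derived quantity worth noting: mdts is a power-of-two multiplier of the controller's minimum memory page size, so assuming QEMU's usual 4 KiB CAP.MPSMIN (an assumption, not something read from this log):

mdts=7 min_page_size=4096   # min_page_size is the assumed CAP.MPSMIN
echo "max transfer: $(( (1 << mdts) * min_page_size )) bytes"   # 524288 = 512 KiB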
00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.772 09:40:56 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.772 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 
09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:08.773 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
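A note on the trace above (and the few registers still to follow): nvme/functions.sh is walking a controller's identify output one "reg : val" pair at a time, with ":" as the field separator, and stashing each non-empty value in a bash associative array through eval. A minimal standalone sketch of that parsing idea, with invented sample input and a fixed array name in place of the dynamic nvme0..nvme3 names the real script builds:

    #!/usr/bin/env bash
    # Sketch of the "IFS=: read -r reg val" + eval pattern seen in the trace.
    # The identify-style input below is made up for illustration.
    declare -A ctrl
    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue        # skip blank/separator lines, as the trace does
      reg=${reg//[[:space:]]/}         # normalize the register name
      val=${val# }                     # drop the space after the colon
      eval "ctrl[$reg]=\"$val\""       # same eval-assignment shape seen above
    done <<'EOF'
    vid    : 0x1b36
    ssvid  : 0x1af4
    mdts   : 7
    ctratt : 0x88010
    EOF
    echo "mdts=${ctrl[mdts]} ctratt=${ctrl[ctratt]}"

The eval indirection is what lets a single loop fill a differently named array per controller.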
00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.774 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:08.775 09:40:56 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:08.775 09:40:56 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:08.775 09:40:56 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:08.775 09:40:56 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:08.775 09:40:56 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:09.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:09.605 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:09.605 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:09.605 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:09.605 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:09.605 09:40:57 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:09.605 09:40:57 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:09.605 09:40:57 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.605 09:40:57 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:09.605 ************************************ 00:09:09.605 START TEST nvme_flexible_data_placement 00:09:09.605 ************************************ 00:09:09.605 09:40:57 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:09.864 Initializing NVMe Controllers 00:09:09.864 Attaching to 0000:00:13.0 00:09:09.864 Controller supports FDP Attached to 0000:00:13.0 00:09:09.864 Namespace ID: 1 Endurance Group ID: 1 00:09:09.864 Initialization complete. 
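The controller selection a few lines up reduces to a single bitwise test: FDP support is advertised in bit 19 of the Controller Attributes (CTRATT) field, so nvme3's 0x88010 passes while the 0x8000 reported by the other three controllers does not. A standalone sketch of that check, using the ctratt values from this run rather than the literal functions.sh source:

    #!/usr/bin/env bash
    # Sketch of the CTRATT bit-19 test that picked nvme3 as the FDP controller.
    declare -A ctratt=([nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010)
    for ctrl in "${!ctratt[@]}"; do
      # 1 << 19 == 0x80000, the FDP bit; 0x88010 & 0x80000 is non-zero
      if (( ctratt[$ctrl] & 1 << 19 )); then
        echo "$ctrl supports FDP"
      fi
    done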
00:09:09.864 00:09:09.864 ================================== 00:09:09.864 == FDP tests for Namespace: #01 == 00:09:09.864 ================================== 00:09:09.864 00:09:09.864 Get Feature: FDP: 00:09:09.864 ================= 00:09:09.864 Enabled: Yes 00:09:09.864 FDP configuration Index: 0 00:09:09.864 00:09:09.864 FDP configurations log page 00:09:09.864 =========================== 00:09:09.864 Number of FDP configurations: 1 00:09:09.864 Version: 0 00:09:09.864 Size: 112 00:09:09.864 FDP Configuration Descriptor: 0 00:09:09.864 Descriptor Size: 96 00:09:09.864 Reclaim Group Identifier format: 2 00:09:09.865 FDP Volatile Write Cache: Not Present 00:09:09.865 FDP Configuration: Valid 00:09:09.865 Vendor Specific Size: 0 00:09:09.865 Number of Reclaim Groups: 2 00:09:09.865 Number of Reclaim Unit Handles: 8 00:09:09.865 Max Placement Identifiers: 128 00:09:09.865 Number of Namespaces Supported: 256 00:09:09.865 Reclaim unit Nominal Size: 6000000 bytes 00:09:09.865 Estimated Reclaim Unit Time Limit: Not Reported 00:09:09.865 RUH Desc #000: RUH Type: Initially Isolated 00:09:09.865 RUH Desc #001: RUH Type: Initially Isolated 00:09:09.865 RUH Desc #002: RUH Type: Initially Isolated 00:09:09.865 RUH Desc #003: RUH Type: Initially Isolated 00:09:09.865 RUH Desc #004: RUH Type: Initially Isolated 00:09:09.865 RUH Desc #005: RUH Type: Initially Isolated 00:09:09.865 RUH Desc #006: RUH Type: Initially Isolated 00:09:09.865 RUH Desc #007: RUH Type: Initially Isolated 00:09:09.865 00:09:09.865 FDP reclaim unit handle usage log page 00:09:09.865 ====================================== 00:09:09.865 Number of Reclaim Unit Handles: 8 00:09:09.865 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:09.865 RUH Usage Desc #001: RUH Attributes: Unused 00:09:09.865 RUH Usage Desc #002: RUH Attributes: Unused 00:09:09.865 RUH Usage Desc #003: RUH Attributes: Unused 00:09:09.865 RUH Usage Desc #004: RUH Attributes: Unused 00:09:09.865 RUH Usage Desc #005: RUH Attributes: Unused 00:09:09.865 RUH Usage Desc #006: RUH Attributes: Unused 00:09:09.865 RUH Usage Desc #007: RUH Attributes: Unused 00:09:09.865 00:09:09.865 FDP statistics log page 00:09:09.865 ======================= 00:09:09.865 Host bytes with metadata written: 1040113664 00:09:09.865 Media bytes with metadata written: 1040207872 00:09:09.865 Media bytes erased: 0 00:09:09.865 00:09:09.865 FDP Reclaim unit handle status 00:09:09.865 ============================== 00:09:09.865 Number of RUHS descriptors: 2 00:09:09.865 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004012 00:09:09.865 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:09.865 00:09:09.865 FDP write on placement id: 0 success 00:09:09.865 00:09:09.865 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:09.865 00:09:09.865 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:09.865 00:09:09.865 Get Feature: FDP Events for Placement handle: #0 00:09:09.865 ======================== 00:09:09.865 Number of FDP Events: 6 00:09:09.865 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:09.865 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:09.865 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:09.865 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:09.865 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:09.865 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:09.865 00:09:09.865 FDP events log
page 00:09:09.865 =================== 00:09:09.865 Number of FDP events: 1 00:09:09.865 FDP Event #0: 00:09:09.865 Event Type: RU Not Written to Capacity 00:09:09.865 Placement Identifier: Valid 00:09:09.865 NSID: Valid 00:09:09.865 Location: Valid 00:09:09.865 Placement Identifier: 0 00:09:09.865 Event Timestamp: 6 00:09:09.865 Namespace Identifier: 1 00:09:09.865 Reclaim Group Identifier: 0 00:09:09.865 Reclaim Unit Handle Identifier: 0 00:09:09.865 00:09:09.865 FDP test passed 00:09:09.865 00:09:09.865 real 0m0.224s 00:09:09.865 user 0m0.077s 00:09:09.865 sys 0m0.047s 00:09:09.865 09:40:57 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.865 ************************************ 00:09:09.865 END TEST nvme_flexible_data_placement 00:09:09.865 ************************************ 00:09:09.865 09:40:57 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:09.865 ************************************ 00:09:09.865 END TEST nvme_fdp 00:09:09.865 ************************************ 00:09:09.865 00:09:09.865 real 0m7.572s 00:09:09.865 user 0m1.165s 00:09:09.865 sys 0m1.280s 00:09:09.865 09:40:57 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.865 09:40:57 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:09.865 09:40:57 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:09.865 09:40:57 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:09.865 09:40:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.865 09:40:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.865 09:40:57 -- common/autotest_common.sh@10 -- # set +x 00:09:09.865 ************************************ 00:09:09.865 START TEST nvme_rpc 00:09:09.865 ************************************ 00:09:09.865 09:40:57 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:09.865 * Looking for test storage... 
00:09:09.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:09.865 09:40:57 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:09.865 09:40:57 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:09.865 09:40:57 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:10.125 09:40:57 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.125 09:40:57 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:10.125 09:40:57 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.125 09:40:57 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:10.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.125 --rc genhtml_branch_coverage=1 00:09:10.125 --rc genhtml_function_coverage=1 00:09:10.125 --rc genhtml_legend=1 00:09:10.125 --rc geninfo_all_blocks=1 00:09:10.125 --rc geninfo_unexecuted_blocks=1 00:09:10.125 00:09:10.126 ' 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:10.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.126 --rc genhtml_branch_coverage=1 00:09:10.126 --rc genhtml_function_coverage=1 00:09:10.126 --rc genhtml_legend=1 00:09:10.126 --rc geninfo_all_blocks=1 00:09:10.126 --rc geninfo_unexecuted_blocks=1 00:09:10.126 00:09:10.126 ' 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:10.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.126 --rc genhtml_branch_coverage=1 00:09:10.126 --rc genhtml_function_coverage=1 00:09:10.126 --rc genhtml_legend=1 00:09:10.126 --rc geninfo_all_blocks=1 00:09:10.126 --rc geninfo_unexecuted_blocks=1 00:09:10.126 00:09:10.126 ' 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:10.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.126 --rc genhtml_branch_coverage=1 00:09:10.126 --rc genhtml_function_coverage=1 00:09:10.126 --rc genhtml_legend=1 00:09:10.126 --rc geninfo_all_blocks=1 00:09:10.126 --rc geninfo_unexecuted_blocks=1 00:09:10.126 00:09:10.126 ' 00:09:10.126 09:40:57 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.126 09:40:57 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:10.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.126 09:40:57 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:10.126 09:40:57 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65700 00:09:10.126 09:40:57 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:10.126 09:40:57 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:10.126 09:40:57 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65700 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65700 ']' 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.126 09:40:57 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.126 [2024-12-05 09:40:57.668283] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:09:10.126 [2024-12-05 09:40:57.668561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65700 ] 00:09:10.386 [2024-12-05 09:40:57.829046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:10.386 [2024-12-05 09:40:57.925453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.386 [2024-12-05 09:40:57.925536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.955 09:40:58 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.955 09:40:58 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:10.955 09:40:58 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:11.215 Nvme0n1 00:09:11.215 09:40:58 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:11.215 09:40:58 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:11.475 request: 00:09:11.475 { 00:09:11.475 "bdev_name": "Nvme0n1", 00:09:11.475 "filename": "non_existing_file", 00:09:11.475 "method": "bdev_nvme_apply_firmware", 00:09:11.475 "req_id": 1 00:09:11.475 } 00:09:11.475 Got JSON-RPC error response 00:09:11.475 response: 00:09:11.475 { 00:09:11.475 "code": -32603, 00:09:11.475 "message": "open file failed." 00:09:11.475 } 00:09:11.475 09:40:58 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:11.475 09:40:58 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:11.475 09:40:58 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:11.735 09:40:59 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:11.735 09:40:59 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65700 00:09:11.735 09:40:59 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65700 ']' 00:09:11.735 09:40:59 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65700 00:09:11.735 09:40:59 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:11.735 09:40:59 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.735 09:40:59 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65700 00:09:11.735 killing process with pid 65700 00:09:11.735 09:40:59 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.735 09:40:59 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.735 09:40:59 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65700' 00:09:11.735 09:40:59 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65700 00:09:11.735 09:40:59 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65700 00:09:13.118 ************************************ 00:09:13.118 END TEST nvme_rpc 00:09:13.118 ************************************ 00:09:13.118 00:09:13.118 real 0m3.087s 00:09:13.118 user 0m5.904s 00:09:13.118 sys 0m0.483s 00:09:13.118 09:41:00 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.118 09:41:00 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.118 09:41:00 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:13.118 09:41:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:13.118 09:41:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.118 09:41:00 -- common/autotest_common.sh@10 -- # set +x 00:09:13.118 ************************************ 00:09:13.118 START TEST nvme_rpc_timeouts 00:09:13.118 ************************************ 00:09:13.118 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:13.118 * Looking for test storage... 00:09:13.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:13.118 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:13.118 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:13.118 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:13.118 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.118 09:41:00 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:13.118 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.118 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:13.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.118 --rc genhtml_branch_coverage=1 00:09:13.118 --rc genhtml_function_coverage=1 00:09:13.118 --rc genhtml_legend=1 00:09:13.118 --rc geninfo_all_blocks=1 00:09:13.118 --rc geninfo_unexecuted_blocks=1 00:09:13.118 00:09:13.118 ' 00:09:13.118 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:13.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.118 --rc genhtml_branch_coverage=1 00:09:13.118 --rc genhtml_function_coverage=1 00:09:13.118 --rc genhtml_legend=1 00:09:13.118 --rc geninfo_all_blocks=1 00:09:13.118 --rc geninfo_unexecuted_blocks=1 00:09:13.118 00:09:13.118 ' 00:09:13.118 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:13.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.119 --rc genhtml_branch_coverage=1 00:09:13.119 --rc genhtml_function_coverage=1 00:09:13.119 --rc genhtml_legend=1 00:09:13.119 --rc geninfo_all_blocks=1 00:09:13.119 --rc geninfo_unexecuted_blocks=1 00:09:13.119 00:09:13.119 ' 00:09:13.119 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:13.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.119 --rc genhtml_branch_coverage=1 00:09:13.119 --rc genhtml_function_coverage=1 00:09:13.119 --rc genhtml_legend=1 00:09:13.119 --rc geninfo_all_blocks=1 00:09:13.119 --rc geninfo_unexecuted_blocks=1 00:09:13.119 00:09:13.119 ' 00:09:13.119 09:41:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.119 09:41:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65765 00:09:13.119 09:41:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65765 00:09:13.119 09:41:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65797 00:09:13.119 09:41:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
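For orientation before the trace continues: the test that just registered its trap saves the target's default configuration, changes the three timeout-related options over RPC, saves the configuration again, and then compares the two dumps setting by setting. A condensed sketch of the capture/modify/capture half, with paths and flags mirroring this log (treat it as an illustration, not the verbatim nvme_rpc_timeouts.sh):

    #!/usr/bin/env bash
    # Sketch of the capture/modify/capture flow about to run below.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" save_config > /tmp/settings_default_65765   # defaults, before any change
    "$rpc" bdev_nvme_set_options \
        --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified_65765  # same dump after the change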
00:09:13.119 09:41:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:13.119 09:41:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65797 00:09:13.119 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65797 ']' 00:09:13.119 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.119 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.119 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.119 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.119 09:41:00 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:13.119 [2024-12-05 09:41:00.714391] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:09:13.119 [2024-12-05 09:41:00.714486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65797 ] 00:09:13.380 [2024-12-05 09:41:00.870310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:13.380 [2024-12-05 09:41:00.965381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.380 [2024-12-05 09:41:00.965400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.952 Checking default timeout settings: 00:09:13.952 09:41:01 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.952 09:41:01 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:13.952 09:41:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:13.952 09:41:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:14.524 Making settings changes with rpc: 00:09:14.524 09:41:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:14.524 09:41:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:14.524 Check default vs. modified settings: 00:09:14.524 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:14.524 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:14.785 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:14.785 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:14.785 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:14.785 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65765 00:09:14.785 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:14.785 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:14.785 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65765 00:09:14.785 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:14.785 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:15.045 Setting action_on_timeout is changed as expected. 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65765 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65765 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:15.045 Setting timeout_us is changed as expected. 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
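[editor's note] The sequence above is the core of the test: snapshot the target's default configuration, change the three timeout knobs over RPC, snapshot again, and diff the two snapshots one setting at a time. Condensed into a sketch — paths, tmpfile names, flags, and values are copied from the trace; the failure branch is an assumption, since a passing run never exercises it:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_65765
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_65765

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # pull the setting's value out of the saved config and strip punctuation
        before=$(grep "$setting" /tmp/settings_default_65765 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_65765 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" == "$after" ] && { echo "Setting $setting was not changed!" >&2; exit 1; }
        echo "Setting $setting is changed as expected."
    done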
00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65765 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:15.045 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65765 00:09:15.046 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:15.046 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:15.046 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:15.046 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:15.046 Setting timeout_admin_us is changed as expected. 00:09:15.046 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:15.046 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:15.046 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65765 /tmp/settings_modified_65765 00:09:15.046 09:41:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65797 00:09:15.046 09:41:02 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65797 ']' 00:09:15.046 09:41:02 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65797 00:09:15.046 09:41:02 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:15.046 09:41:02 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.046 09:41:02 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65797 00:09:15.046 09:41:02 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.046 09:41:02 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.046 killing process with pid 65797 00:09:15.046 09:41:02 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65797' 00:09:15.046 09:41:02 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65797 00:09:15.046 09:41:02 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65797 00:09:16.433 RPC TIMEOUT SETTING TEST PASSED. 00:09:16.433 09:41:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
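[editor's note] Teardown follows the autotest killprocess pattern visible in the trace: confirm the pid is still alive, refuse to kill a sudo wrapper, then kill and reap. A condensed sketch — the intent of the sudo guard is inferred from the `'[' reactor_0 = sudo ']'` check above, and the `uname` branch (Linux vs. FreeBSD ps flags) is collapsed to the Linux side seen here:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 0    # nothing to do if it already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = "sudo" ] && return 1           # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap it and collect the exit status
    }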
00:09:16.433 00:09:16.433 real 0m3.205s 00:09:16.433 user 0m6.296s 00:09:16.433 sys 0m0.467s 00:09:16.433 09:41:03 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.433 09:41:03 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:16.433 ************************************ 00:09:16.433 END TEST nvme_rpc_timeouts 00:09:16.433 ************************************ 00:09:16.433 09:41:03 -- spdk/autotest.sh@239 -- # uname -s 00:09:16.433 09:41:03 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:16.433 09:41:03 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:16.433 09:41:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:16.433 09:41:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.433 09:41:03 -- common/autotest_common.sh@10 -- # set +x 00:09:16.433 ************************************ 00:09:16.433 START TEST sw_hotplug 00:09:16.433 ************************************ 00:09:16.433 09:41:03 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:16.433 * Looking for test storage... 00:09:16.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:16.433 09:41:03 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:16.433 09:41:03 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:16.433 09:41:03 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:16.433 09:41:03 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.433 09:41:03 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:16.433 09:41:03 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.433 09:41:03 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:16.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.433 --rc genhtml_branch_coverage=1 00:09:16.433 --rc genhtml_function_coverage=1 00:09:16.433 --rc genhtml_legend=1 00:09:16.433 --rc geninfo_all_blocks=1 00:09:16.433 --rc geninfo_unexecuted_blocks=1 00:09:16.433 00:09:16.433 ' 00:09:16.433 09:41:03 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:16.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.433 --rc genhtml_branch_coverage=1 00:09:16.433 --rc genhtml_function_coverage=1 00:09:16.433 --rc genhtml_legend=1 00:09:16.433 --rc geninfo_all_blocks=1 00:09:16.433 --rc geninfo_unexecuted_blocks=1 00:09:16.433 00:09:16.433 ' 00:09:16.433 09:41:03 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:16.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.433 --rc genhtml_branch_coverage=1 00:09:16.433 --rc genhtml_function_coverage=1 00:09:16.433 --rc genhtml_legend=1 00:09:16.433 --rc geninfo_all_blocks=1 00:09:16.433 --rc geninfo_unexecuted_blocks=1 00:09:16.433 00:09:16.433 ' 00:09:16.433 09:41:03 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:16.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.433 --rc genhtml_branch_coverage=1 00:09:16.434 --rc genhtml_function_coverage=1 00:09:16.434 --rc genhtml_legend=1 00:09:16.434 --rc geninfo_all_blocks=1 00:09:16.434 --rc geninfo_unexecuted_blocks=1 00:09:16.434 00:09:16.434 ' 00:09:16.434 09:41:03 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:16.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:16.695 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:16.695 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:16.695 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:16.695 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:16.695 09:41:04 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:16.695 09:41:04 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:16.695 09:41:04 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
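[editor's note] `nvme_in_userspace`, traced next, builds the `nvmes` array by walking every PCI function whose class/subclass/prog-if is 01/08/02 (an NVM Express controller) and keeping those not excluded by PCI_ALLOWED/PCI_BLOCKED. The core lspci filter, lifted from the trace into a standalone sketch — note that cc deliberately carries the double quotes so it matches lspci's quoted class field:

    # list NVMe controllers: class 01 (mass storage), subclass 08 (NVM), prog-if 02
    iter_nvme_bdfs() {
        lspci -mm -n -D |          # machine-readable, numeric IDs, full domain:bus:dev.fn
            grep -i -- -p02 |      # keep only prog-if 02 entries
            awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' |
            tr -d '"'
    }
    iter_nvme_bdfs   # -> 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 on this VM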
00:09:16.695 09:41:04 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:16.695 09:41:04 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:16.695 09:41:04 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:16.695 09:41:04 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:16.695 09:41:04 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:16.695 09:41:04 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:16.695 09:41:04 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:16.695 09:41:04 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:16.695 09:41:04 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:16.695 09:41:04 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:16.695 09:41:04 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:16.695 09:41:04 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:16.695 09:41:04 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:16.695 09:41:04 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:16.695 09:41:04 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:16.956 09:41:04 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:16.956 09:41:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:16.957 09:41:04 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:16.957 09:41:04 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:16.957 09:41:04 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:16.957 09:41:04 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:17.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:17.218 Waiting for block devices as requested 00:09:17.218 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:17.479 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:17.479 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:17.479 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.850 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:22.850 09:41:10 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:22.850 09:41:10 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:22.850 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:22.850 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.850 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:23.110 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:23.372 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:23.372 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:23.372 09:41:10 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:23.372 09:41:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:23.372 09:41:10 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:23.372 09:41:10 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:23.372 09:41:10 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66648 00:09:23.372 09:41:10 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:23.372 09:41:10 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:23.372 09:41:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:23.372 09:41:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:23.372 09:41:10 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:23.372 09:41:10 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:23.372 09:41:10 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:23.372 09:41:10 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:23.634 09:41:10 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:23.634 09:41:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:23.634 09:41:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:23.634 09:41:11 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:23.634 09:41:11 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:23.634 09:41:11 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:23.634 Initializing NVMe Controllers 00:09:23.634 Attaching to 0000:00:10.0 00:09:23.634 Attaching to 0000:00:11.0 00:09:23.634 Attached to 0000:00:10.0 00:09:23.634 Attached to 0000:00:11.0 00:09:23.634 Initialization complete. Starting I/O... 
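[editor's note] With both controllers attached and the hotplug example driving I/O against them, each of the three events now removes the devices out from under it and brings them back, waiting `hotplug_wait=6` seconds in between. The xtrace only shows bare `echo 1` / `echo uio_pci_generic` / `echo <bdf>` commands without their redirection targets; the sysfs files below are the standard kernel interfaces, so this per-device sequence is a reconstruction consistent with the trace, not a verbatim copy of sw_hotplug.sh:

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"        # surprise-remove under active I/O
    sleep 6                                            # hotplug_wait from the trace
    echo 1 > /sys/bus/pci/rescan                       # rediscover the function on the bus
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe           # bind it per the override
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override again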
00:09:23.634 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:23.634 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:23.634 00:09:24.577 QEMU NVMe Ctrl (12340 ): 2672 I/Os completed (+2672) 00:09:24.577 QEMU NVMe Ctrl (12341 ): 2670 I/Os completed (+2670) 00:09:24.577 00:09:25.963 QEMU NVMe Ctrl (12340 ): 5772 I/Os completed (+3100) 00:09:25.963 QEMU NVMe Ctrl (12341 ): 5762 I/Os completed (+3092) 00:09:25.963 00:09:26.909 QEMU NVMe Ctrl (12340 ): 8942 I/Os completed (+3170) 00:09:26.909 QEMU NVMe Ctrl (12341 ): 8953 I/Os completed (+3191) 00:09:26.909 00:09:27.850 QEMU NVMe Ctrl (12340 ): 12428 I/Os completed (+3486) 00:09:27.850 QEMU NVMe Ctrl (12341 ): 12441 I/Os completed (+3488) 00:09:27.850 00:09:28.788 QEMU NVMe Ctrl (12340 ): 16125 I/Os completed (+3697) 00:09:28.788 QEMU NVMe Ctrl (12341 ): 16146 I/Os completed (+3705) 00:09:28.788 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:29.731 [2024-12-05 09:41:17.006560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:29.731 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:29.731 [2024-12-05 09:41:17.007483] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 [2024-12-05 09:41:17.007538] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 [2024-12-05 09:41:17.007555] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 [2024-12-05 09:41:17.007570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:29.731 [2024-12-05 09:41:17.009142] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 [2024-12-05 09:41:17.009182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 [2024-12-05 09:41:17.009195] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 [2024-12-05 09:41:17.009206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:29.731 [2024-12-05 09:41:17.025942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:29.731 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:29.731 [2024-12-05 09:41:17.026961] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 [2024-12-05 09:41:17.027065] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 [2024-12-05 09:41:17.027089] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 [2024-12-05 09:41:17.027103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:29.731 [2024-12-05 09:41:17.028475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 [2024-12-05 09:41:17.028499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 [2024-12-05 09:41:17.028523] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 [2024-12-05 09:41:17.028535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:29.731 EAL: Cannot open sysfs resource 00:09:29.731 EAL: pci_scan_one(): cannot parse resource 00:09:29.731 EAL: Scan for (pci) bus failed. 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:29.731 Attaching to 0000:00:10.0 00:09:29.731 Attached to 0000:00:10.0 00:09:29.731 QEMU NVMe Ctrl (12340 ): 32 I/Os completed (+32) 00:09:29.731 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:29.731 09:41:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:29.731 Attaching to 0000:00:11.0 00:09:29.731 Attached to 0000:00:11.0 00:09:30.676 QEMU NVMe Ctrl (12340 ): 3565 I/Os completed (+3533) 00:09:30.676 QEMU NVMe Ctrl (12341 ): 3313 I/Os completed (+3313) 00:09:30.676 00:09:31.620 QEMU NVMe Ctrl (12340 ): 6677 I/Os completed (+3112) 00:09:31.620 QEMU NVMe Ctrl (12341 ): 6366 I/Os completed (+3053) 00:09:31.620 00:09:33.007 QEMU NVMe Ctrl (12340 ): 10347 I/Os completed (+3670) 00:09:33.007 QEMU NVMe Ctrl (12341 ): 10033 I/Os completed (+3667) 00:09:33.007 00:09:33.580 QEMU NVMe Ctrl (12340 ): 14053 I/Os completed (+3706) 00:09:33.580 QEMU NVMe Ctrl (12341 ): 13735 I/Os completed (+3702) 00:09:33.580 00:09:34.966 QEMU NVMe Ctrl (12340 ): 17765 I/Os completed (+3712) 00:09:34.966 QEMU NVMe Ctrl (12341 ): 17443 I/Os completed (+3708) 00:09:34.966 00:09:35.909 QEMU NVMe Ctrl (12340 ): 21481 I/Os completed (+3716) 00:09:35.909 QEMU NVMe Ctrl (12341 ): 21151 I/Os completed (+3708) 00:09:35.909 00:09:36.853 QEMU NVMe Ctrl (12340 ): 25245 I/Os completed (+3764) 00:09:36.853 
QEMU NVMe Ctrl (12341 ): 24920 I/Os completed (+3769) 00:09:36.853 00:09:37.797 QEMU NVMe Ctrl (12340 ): 28888 I/Os completed (+3643) 00:09:37.797 QEMU NVMe Ctrl (12341 ): 28583 I/Os completed (+3663) 00:09:37.797 00:09:38.788 QEMU NVMe Ctrl (12340 ): 32853 I/Os completed (+3965) 00:09:38.788 QEMU NVMe Ctrl (12341 ): 32535 I/Os completed (+3952) 00:09:38.788 00:09:39.728 QEMU NVMe Ctrl (12340 ): 36572 I/Os completed (+3719) 00:09:39.728 QEMU NVMe Ctrl (12341 ): 36244 I/Os completed (+3709) 00:09:39.728 00:09:40.668 QEMU NVMe Ctrl (12340 ): 39718 I/Os completed (+3146) 00:09:40.668 QEMU NVMe Ctrl (12341 ): 39307 I/Os completed (+3063) 00:09:40.668 00:09:41.608 QEMU NVMe Ctrl (12340 ): 43251 I/Os completed (+3533) 00:09:41.608 QEMU NVMe Ctrl (12341 ): 42763 I/Os completed (+3456) 00:09:41.608 00:09:41.868 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:41.869 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:41.869 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:41.869 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:41.869 [2024-12-05 09:41:29.270780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:41.869 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:41.869 [2024-12-05 09:41:29.271767] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 [2024-12-05 09:41:29.271896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 [2024-12-05 09:41:29.271930] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 [2024-12-05 09:41:29.271991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:41.869 [2024-12-05 09:41:29.273642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 [2024-12-05 09:41:29.273702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 [2024-12-05 09:41:29.273728] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 [2024-12-05 09:41:29.273752] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:41.869 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:41.869 [2024-12-05 09:41:29.290244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:41.869 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:41.869 [2024-12-05 09:41:29.291140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 [2024-12-05 09:41:29.291232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 [2024-12-05 09:41:29.291293] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 [2024-12-05 09:41:29.291318] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:41.869 [2024-12-05 09:41:29.292740] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 [2024-12-05 09:41:29.292824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 [2024-12-05 09:41:29.292851] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 [2024-12-05 09:41:29.292898] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.869 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:41.869 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:41.869 EAL: Scan for (pci) bus failed. 00:09:41.869 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:41.869 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:41.869 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:41.869 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:41.869 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:41.869 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:41.869 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:41.869 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:41.869 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:41.869 Attaching to 0000:00:10.0 00:09:41.869 Attached to 0000:00:10.0 00:09:42.129 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:42.129 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:42.129 09:41:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:42.129 Attaching to 0000:00:11.0 00:09:42.129 Attached to 0000:00:11.0 00:09:42.700 QEMU NVMe Ctrl (12340 ): 2721 I/Os completed (+2721) 00:09:42.700 QEMU NVMe Ctrl (12341 ): 2453 I/Os completed (+2453) 00:09:42.700 00:09:43.644 QEMU NVMe Ctrl (12340 ): 6495 I/Os completed (+3774) 00:09:43.644 QEMU NVMe Ctrl (12341 ): 6213 I/Os completed (+3760) 00:09:43.644 00:09:44.588 QEMU NVMe Ctrl (12340 ): 10145 I/Os completed (+3650) 00:09:44.589 QEMU NVMe Ctrl (12341 ): 9865 I/Os completed (+3652) 00:09:44.589 00:09:45.976 QEMU NVMe Ctrl (12340 ): 13978 I/Os completed (+3833) 00:09:45.976 QEMU NVMe Ctrl (12341 ): 13681 I/Os completed (+3816) 00:09:45.976 00:09:46.919 QEMU NVMe Ctrl (12340 ): 17708 I/Os completed (+3730) 00:09:46.919 QEMU NVMe Ctrl (12341 ): 17423 I/Os completed (+3742) 00:09:46.919 00:09:47.862 QEMU NVMe Ctrl (12340 ): 21318 I/Os completed (+3610) 00:09:47.862 QEMU NVMe Ctrl (12341 ): 21016 I/Os completed (+3593) 00:09:47.862 00:09:48.804 QEMU NVMe Ctrl (12340 ): 25111 I/Os completed (+3793) 00:09:48.804 QEMU NVMe Ctrl (12341 ): 24811 I/Os completed (+3795) 00:09:48.804 
00:09:49.750 QEMU NVMe Ctrl (12340 ): 28784 I/Os completed (+3673) 00:09:49.750 QEMU NVMe Ctrl (12341 ): 28485 I/Os completed (+3674) 00:09:49.750 00:09:50.693 QEMU NVMe Ctrl (12340 ): 32621 I/Os completed (+3837) 00:09:50.693 QEMU NVMe Ctrl (12341 ): 32326 I/Os completed (+3841) 00:09:50.693 00:09:51.636 QEMU NVMe Ctrl (12340 ): 36397 I/Os completed (+3776) 00:09:51.637 QEMU NVMe Ctrl (12341 ): 36099 I/Os completed (+3773) 00:09:51.637 00:09:52.580 QEMU NVMe Ctrl (12340 ): 40083 I/Os completed (+3686) 00:09:52.580 QEMU NVMe Ctrl (12341 ): 39774 I/Os completed (+3675) 00:09:52.580 00:09:53.964 QEMU NVMe Ctrl (12340 ): 43623 I/Os completed (+3540) 00:09:53.964 QEMU NVMe Ctrl (12341 ): 43408 I/Os completed (+3634) 00:09:53.964 00:09:53.964 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:53.964 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:53.964 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:53.964 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:53.964 [2024-12-05 09:41:41.521160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:53.964 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:53.964 [2024-12-05 09:41:41.522621] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.964 [2024-12-05 09:41:41.522694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.964 [2024-12-05 09:41:41.522713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.964 [2024-12-05 09:41:41.522732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.964 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:53.964 [2024-12-05 09:41:41.525670] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.964 [2024-12-05 09:41:41.525742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.964 [2024-12-05 09:41:41.525758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.965 [2024-12-05 09:41:41.525775] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.965 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:53.965 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:53.965 [2024-12-05 09:41:41.541849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:53.965 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:53.965 [2024-12-05 09:41:41.543062] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.965 [2024-12-05 09:41:41.543128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.965 [2024-12-05 09:41:41.543149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.965 [2024-12-05 09:41:41.543165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.965 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:53.965 [2024-12-05 09:41:41.545264] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.965 [2024-12-05 09:41:41.545322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.965 [2024-12-05 09:41:41.545342] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.965 [2024-12-05 09:41:41.545359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.965 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:53.965 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:54.225 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:54.225 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:54.225 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:54.225 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:54.225 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:54.225 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:54.225 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:54.225 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:54.225 Attaching to 0000:00:10.0 00:09:54.225 Attached to 0000:00:10.0 00:09:54.225 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:54.225 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:54.225 09:41:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:54.225 Attaching to 0000:00:11.0 00:09:54.225 Attached to 0000:00:11.0 00:09:54.225 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:54.225 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:54.225 [2024-12-05 09:41:41.851083] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:06.489 09:41:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:06.489 09:41:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:06.489 09:41:53 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.84 00:10:06.489 09:41:53 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.84 00:10:06.489 09:41:53 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:06.489 09:41:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.84 00:10:06.489 09:41:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.84 2 00:10:06.489 remove_attach_helper took 42.84s to complete (handling 2 nvme drive(s)) 09:41:53 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:13.073 09:41:59 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66648 00:10:13.073 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66648) - No such process 00:10:13.073 09:41:59 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66648 00:10:13.073 09:41:59 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:13.073 09:41:59 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:13.073 09:41:59 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:13.073 09:41:59 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67198 00:10:13.073 09:41:59 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:13.073 09:41:59 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:13.073 09:41:59 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67198 00:10:13.073 09:41:59 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67198 ']' 00:10:13.073 09:41:59 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.073 09:41:59 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.073 09:41:59 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.073 09:41:59 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.073 09:41:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:13.073 [2024-12-05 09:41:59.951300] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:10:13.073 [2024-12-05 09:41:59.951688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67198 ] 00:10:13.073 [2024-12-05 09:42:00.118573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.073 [2024-12-05 09:42:00.244371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.334 09:42:00 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.334 09:42:00 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:13.334 09:42:00 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:13.334 09:42:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.334 09:42:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:13.334 09:42:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.335 09:42:00 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:13.335 09:42:00 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:13.335 09:42:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:13.335 09:42:00 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:13.335 09:42:00 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:13.335 09:42:00 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:13.335 09:42:00 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:13.335 09:42:00 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:13.335 09:42:00 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:13.335 09:42:00 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:13.335 09:42:00 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:13.335 09:42:00 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:13.335 09:42:00 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:19.925 09:42:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:19.925 09:42:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:19.925 09:42:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:19.925 09:42:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:19.925 09:42:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:19.925 09:42:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:19.925 09:42:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:19.925 09:42:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:19.925 09:42:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:19.925 09:42:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:19.925 09:42:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:19.925 09:42:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.925 09:42:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:19.925 09:42:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.925 09:42:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:19.925 09:42:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:19.925 [2024-12-05 09:42:07.044864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:19.925 [2024-12-05 09:42:07.046080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.925 [2024-12-05 09:42:07.046118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:19.925 [2024-12-05 09:42:07.046131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.925 [2024-12-05 09:42:07.046149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.925 [2024-12-05 09:42:07.046156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:19.925 [2024-12-05 09:42:07.046165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.925 [2024-12-05 09:42:07.046172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.925 [2024-12-05 09:42:07.046180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:19.925 [2024-12-05 09:42:07.046186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.925 [2024-12-05 09:42:07.046197] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.925 [2024-12-05 09:42:07.046203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:19.925 [2024-12-05 09:42:07.046211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.925 09:42:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:19.925 09:42:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:19.925 09:42:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:19.925 09:42:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:19.925 09:42:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:19.925 09:42:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:19.925 09:42:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.925 09:42:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:19.925 [2024-12-05 09:42:07.544857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:19.925 [2024-12-05 09:42:07.546024] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.925 [2024-12-05 09:42:07.546053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:19.925 [2024-12-05 09:42:07.546064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.925 [2024-12-05 09:42:07.546077] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.925 [2024-12-05 09:42:07.546085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:19.925 [2024-12-05 09:42:07.546092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.925 [2024-12-05 09:42:07.546101] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.925 [2024-12-05 09:42:07.546108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:19.925 [2024-12-05 09:42:07.546115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.925 [2024-12-05 09:42:07.546122] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.925 [2024-12-05 09:42:07.546130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:19.925 [2024-12-05 09:42:07.546136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.925 09:42:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.186 09:42:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:20.187 09:42:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:20.448 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:20.448 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:20.448 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:20.448 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:20.448 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq 
-r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:20.448 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:20.448 09:42:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.448 09:42:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:20.710 09:42:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.710 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:20.710 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:20.710 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:20.710 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:20.710 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:20.710 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:20.710 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:20.710 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:20.710 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:20.710 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:20.710 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:20.710 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:20.710 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:33.021 09:42:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.021 09:42:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:33.021 09:42:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:33.021 09:42:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.021 09:42:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:33.021 09:42:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.021 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:33.021 
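[editor's note] In this target-driven phase (use_bdev=true) the harness no longer watches sysfs directly; it asks the running spdk_tgt which PCI addresses still back a bdev and polls until the removed controller's address disappears. The `bdev_bdfs` helper and the wait loop, condensed from the trace — the grep-based exit condition is my simplification of the `(( count > 0 ))` check above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev_bdfs() {
        "$rpc" bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }
    while bdev_bdfs | grep -q '0000:00:10\.0'; do
        printf 'Still waiting for %s to be gone\n' 0000:00:10.0
        sleep 0.5
    done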
09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:33.021 [2024-12-05 09:42:20.445059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:33.021 [2024-12-05 09:42:20.446341] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.021 [2024-12-05 09:42:20.446439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.021 [2024-12-05 09:42:20.446502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.021 [2024-12-05 09:42:20.446575] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.021 [2024-12-05 09:42:20.446594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.022 [2024-12-05 09:42:20.446657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.022 [2024-12-05 09:42:20.446686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.022 [2024-12-05 09:42:20.446704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.022 [2024-12-05 09:42:20.446752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.022 [2024-12-05 09:42:20.446779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.022 [2024-12-05 09:42:20.446850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.022 [2024-12-05 09:42:20.446878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.592 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:33.592 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:33.592 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:33.592 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:33.592 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:33.592 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:33.592 09:42:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.592 09:42:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:33.592 09:42:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.592 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:33.592 09:42:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:33.592 [2024-12-05 09:42:21.045058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:33.592 [2024-12-05 09:42:21.046302] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.592 [2024-12-05 09:42:21.046401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.592 [2024-12-05 09:42:21.046467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.592 [2024-12-05 09:42:21.046542] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.592 [2024-12-05 09:42:21.046563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.592 [2024-12-05 09:42:21.046616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.592 [2024-12-05 09:42:21.046644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.592 [2024-12-05 09:42:21.046661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.592 [2024-12-05 09:42:21.046685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.592 [2024-12-05 09:42:21.046740] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.592 [2024-12-05 09:42:21.046759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.592 [2024-12-05 09:42:21.046783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:34.160 09:42:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.160 09:42:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.160 09:42:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
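The @50/@51 lines traced above form the detach wait loop: re-query the bdev list every half second and report which BDFs are still registered. The trace shows the condition (array refresh, count check, sleep) on line 50 and the printf on line 51; a plausible reconstruction, not the verbatim source:

while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)) && sleep 0.5; do
    # one line per lingering device, e.g. "Still waiting for 0000:00:10.0 to be gone"
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
done
# the loop falls through once (( 0 > 0 )) fails, matching the trace above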
00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:34.160 09:42:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:46.411 09:42:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.411 09:42:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:46.411 09:42:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:46.411 09:42:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.411 09:42:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:46.411 [2024-12-05 09:42:33.845276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:46.411 [2024-12-05 09:42:33.846553] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.411 [2024-12-05 09:42:33.846650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.411 [2024-12-05 09:42:33.846731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.411 [2024-12-05 09:42:33.847103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.411 [2024-12-05 09:42:33.847180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.411 [2024-12-05 09:42:33.847215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.411 [2024-12-05 09:42:33.847306] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.411 [2024-12-05 09:42:33.847327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.411 [2024-12-05 09:42:33.847351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.411 [2024-12-05 09:42:33.847430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.411 [2024-12-05 09:42:33.847559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.411 [2024-12-05 09:42:33.847628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.411 09:42:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:46.411 09:42:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:46.672 [2024-12-05 09:42:34.245275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
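The bare echo values traced at sw_hotplug.sh@39-40 (echo 1 per device) and @56-62 (echo 1, then uio_pci_generic / BDF / BDF / '' per device) are writes into PCI sysfs attributes; xtrace prints only the echoed values, never the redirection targets. A plausible shape under standard Linux PCI sysfs semantics (all sysfs paths below are assumptions, since the script body is not included in this log):

# detach: software-remove each NVMe function from the bus (@39-40)
for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"
done

# reattach: rescan the bus once (@56), then pin each device to uio_pci_generic (@58-62)
echo 1 > /sys/bus/pci/rescan
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe                                      # assumed target of the first BDF write
    echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind 2> /dev/null || true  # assumed target of the second
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"                         # clear the override again (@62)
done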
00:10:46.672 [2024-12-05 09:42:34.246485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.672 [2024-12-05 09:42:34.246596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.672 [2024-12-05 09:42:34.246660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.672 [2024-12-05 09:42:34.246688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.672 [2024-12-05 09:42:34.246735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.672 [2024-12-05 09:42:34.246761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.672 [2024-12-05 09:42:34.246810] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.672 [2024-12-05 09:42:34.246829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.672 [2024-12-05 09:42:34.246855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.672 [2024-12-05 09:42:34.246908] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.672 [2024-12-05 09:42:34.246927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.672 [2024-12-05 09:42:34.246951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:46.934 09:42:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.934 09:42:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:46.934 09:42:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:46.934 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:47.196 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:47.196 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:47.196 09:42:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.72 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.72 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.72 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.72 2 00:10:59.425 remove_attach_helper took 45.72s to complete (handling 2 nvme drive(s)) 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:59.425 09:42:46 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:59.425 09:42:46 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:59.425 09:42:46 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:06.013 09:42:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:06.013 09:42:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.013 09:42:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.013 09:42:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.013 09:42:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.013 09:42:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:06.013 09:42:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.013 09:42:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.013 09:42:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.013 09:42:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.013 09:42:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.013 09:42:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.013 09:42:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.013 09:42:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.013 09:42:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:06.013 09:42:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:06.013 [2024-12-05 09:42:52.797098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:06.013 [2024-12-05 09:42:52.798031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.013 [2024-12-05 09:42:52.798066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.013 [2024-12-05 09:42:52.798076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.013 [2024-12-05 09:42:52.798094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.013 [2024-12-05 09:42:52.798102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.013 [2024-12-05 09:42:52.798110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.013 [2024-12-05 09:42:52.798117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.013 [2024-12-05 09:42:52.798125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.013 [2024-12-05 09:42:52.798132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.013 [2024-12-05 09:42:52.798140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.013 [2024-12-05 09:42:52.798147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.013 [2024-12-05 09:42:52.798156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.013 [2024-12-05 09:42:53.197095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
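The autotest_common.sh@709-722 lines traced above belong to a generic timing wrapper: it runs the helper under bash's time builtin with TIMEFORMAT=%2R so only the elapsed seconds come back, echoes that number for the caller to capture (helper_time=45.72 at sw_hotplug.sh@21), and preserves the command's exit status. A sketch under those assumptions; the exact redirections in the real autotest_common.sh are not visible in this log:

timing_cmd() {
    local cmd_es=0
    [[ -t 0 ]] && exec < /dev/null   # assumed purpose of the @711 exec: detach stdin from the tty
    local time=0 TIMEFORMAT=%2R      # %2R: print just the real time, two decimals
    # capture only the time builtin's report; the command's own output is discarded
    time=$( { time "$@" > /dev/null 2>&1; } 2>&1 ) || cmd_es=$?
    echo "$time"                     # caller: helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    return "$cmd_es"
}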
00:11:06.013 [2024-12-05 09:42:53.197969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.013 [2024-12-05 09:42:53.198101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.013 [2024-12-05 09:42:53.198118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.013 [2024-12-05 09:42:53.198130] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.013 [2024-12-05 09:42:53.198138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.013 [2024-12-05 09:42:53.198145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.013 [2024-12-05 09:42:53.198153] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.013 [2024-12-05 09:42:53.198159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.013 [2024-12-05 09:42:53.198170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.013 [2024-12-05 09:42:53.198177] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.013 [2024-12-05 09:42:53.198186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.014 [2024-12-05 09:42:53.198192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.014 09:42:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.014 09:42:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.014 09:42:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.014 09:42:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:18.245 09:43:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.245 09:43:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:18.245 09:43:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:18.245 [2024-12-05 09:43:05.597316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:18.245 [2024-12-05 09:43:05.600112] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.245 [2024-12-05 09:43:05.600228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.245 [2024-12-05 09:43:05.600293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.245 [2024-12-05 09:43:05.600356] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.245 [2024-12-05 09:43:05.600375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.245 [2024-12-05 09:43:05.600427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.245 [2024-12-05 09:43:05.600481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.245 [2024-12-05 09:43:05.600502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.245 [2024-12-05 09:43:05.600568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.245 [2024-12-05 09:43:05.600596] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.245 [2024-12-05 09:43:05.600725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.245 [2024-12-05 09:43:05.600754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:18.245 09:43:05 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:18.245 09:43:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.245 09:43:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:18.245 09:43:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:18.245 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:18.505 [2024-12-05 09:43:05.997312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:18.505 [2024-12-05 09:43:05.998254] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.505 [2024-12-05 09:43:05.998283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.505 [2024-12-05 09:43:05.998295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.505 [2024-12-05 09:43:05.998307] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.505 [2024-12-05 09:43:05.998317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.505 [2024-12-05 09:43:05.998324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.505 [2024-12-05 09:43:05.998333] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.505 [2024-12-05 09:43:05.998340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.506 [2024-12-05 09:43:05.998348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.506 [2024-12-05 09:43:05.998355] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.506 [2024-12-05 09:43:05.998363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.506 [2024-12-05 09:43:05.998369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:18.767 09:43:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.767 09:43:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:18.767 09:43:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:18.767 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:19.028 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:19.028 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.028 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.262 09:43:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.262 09:43:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.262 09:43:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.262 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.262 09:43:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.262 09:43:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.262 [2024-12-05 09:43:18.497548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:31.262 [2024-12-05 09:43:18.498713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.262 [2024-12-05 09:43:18.498805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.262 [2024-12-05 09:43:18.498917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.262 [2024-12-05 09:43:18.498979] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.262 [2024-12-05 09:43:18.499014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.262 [2024-12-05 09:43:18.499039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.262 [2024-12-05 09:43:18.499063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.262 [2024-12-05 09:43:18.499083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.262 [2024-12-05 09:43:18.499180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.262 [2024-12-05 09:43:18.499208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.262 [2024-12-05 09:43:18.499225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.262 [2024-12-05 09:43:18.499334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.263 09:43:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.263 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:31.263 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:31.523 [2024-12-05 09:43:18.997549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:31.523 [2024-12-05 09:43:18.998472] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-12-05 09:43:18.998577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.523 [2024-12-05 09:43:18.998646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.523 [2024-12-05 09:43:18.998720] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-12-05 09:43:18.998741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.523 [2024-12-05 09:43:18.998788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.523 [2024-12-05 09:43:18.998816] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-12-05 09:43:18.998832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.523 [2024-12-05 09:43:18.998857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.523 [2024-12-05 09:43:18.998907] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-12-05 09:43:18.998929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.523 [2024-12-05 09:43:18.998952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.523 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:31.523 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.523 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.523 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.523 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.523 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.523 09:43:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.523 09:43:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.523 09:43:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.523 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:31.523 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:31.523 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.523 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.523 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:31.784 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:31.784 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.784 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.784 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.784 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:31.784 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:31.784 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.784 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:44.044 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:44.044 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:44.044 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:44.044 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.044 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.044 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.044 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:44.044 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.63 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.63 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:44.044 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.63 00:11:44.044 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.63 2 00:11:44.044 remove_attach_helper took 44.63s to complete (handling 2 nvme drive(s)) 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:44.044 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67198 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67198 ']' 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67198 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67198 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.044 killing process with pid 67198 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67198' 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67198 00:11:44.044 09:43:31 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67198 00:11:44.984 09:43:32 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:45.243 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:45.813 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:45.813 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:45.813 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:45.813 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:46.075 00:11:46.075 real 2m29.687s 00:11:46.075 user 1m51.923s 00:11:46.075 sys 0m16.476s 00:11:46.075 09:43:33 sw_hotplug -- 
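The autotest_common.sh@954-978 lines above trace a killprocess helper: validate the pid, confirm the process is still alive, resolve its command name (reactor_0 here) so it never signals a sudo wrapper by mistake, then kill and reap it. A sketch reconstructed from the traced checks; treat the branch details as assumptions:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                             # @954: no pid supplied
    kill -0 "$pid" 2> /dev/null || return 0               # @958: already gone, nothing to do
    if [ "$(uname)" = Linux ]; then                       # @959
        process_name=$(ps --no-headers -o comm= "$pid")   # @960: e.g. reactor_0
    else
        process_name=kill                                 # assumed non-Linux fallback
    fi
    if [ "$process_name" = sudo ]; then                   # @964
        return 1  # sketch only: the real helper presumably handles the sudo child instead
    fi
    echo "killing process with pid $pid"                  # @972
    kill "$pid"                                           # @973
    wait "$pid" || true                                   # @978: reap, ignore its exit code
}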
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.075 ************************************ 00:11:46.075 END TEST sw_hotplug 00:11:46.075 ************************************ 00:11:46.075 09:43:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.075 09:43:33 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:46.075 09:43:33 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:46.075 09:43:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:46.075 09:43:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.075 09:43:33 -- common/autotest_common.sh@10 -- # set +x 00:11:46.075 ************************************ 00:11:46.075 START TEST nvme_xnvme 00:11:46.075 ************************************ 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:46.075 * Looking for test storage... 00:11:46.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.075 09:43:33 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:46.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.075 --rc genhtml_branch_coverage=1 00:11:46.075 --rc genhtml_function_coverage=1 00:11:46.075 --rc genhtml_legend=1 00:11:46.075 --rc geninfo_all_blocks=1 00:11:46.075 --rc geninfo_unexecuted_blocks=1 00:11:46.075 00:11:46.075 ' 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:46.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.075 --rc genhtml_branch_coverage=1 00:11:46.075 --rc genhtml_function_coverage=1 00:11:46.075 --rc genhtml_legend=1 00:11:46.075 --rc geninfo_all_blocks=1 00:11:46.075 --rc geninfo_unexecuted_blocks=1 00:11:46.075 00:11:46.075 ' 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:46.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.075 --rc genhtml_branch_coverage=1 00:11:46.075 --rc genhtml_function_coverage=1 00:11:46.075 --rc genhtml_legend=1 00:11:46.075 --rc geninfo_all_blocks=1 00:11:46.075 --rc geninfo_unexecuted_blocks=1 00:11:46.075 00:11:46.075 ' 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:46.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.075 --rc genhtml_branch_coverage=1 00:11:46.075 --rc genhtml_function_coverage=1 00:11:46.075 --rc genhtml_legend=1 00:11:46.075 --rc geninfo_all_blocks=1 00:11:46.075 --rc geninfo_unexecuted_blocks=1 00:11:46.075 00:11:46.075 ' 00:11:46.075 09:43:33 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:11:46.075 09:43:33 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:46.075 09:43:33 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:46.075 09:43:33 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:46.075 09:43:33 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
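The scripts/common.sh@333-373 lines traced earlier implement the lcov version gate: lt 1.15 2 calls cmp_versions, which splits both version strings on the characters . - :, walks the components numerically, and returns according to the requested operator. A condensed sketch of the traced logic:

decimal() {
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # non-numeric components count as 0 (@353-355)
}

cmp_versions() {
    local ver1 ver2 v
    local op=$2
    IFS=.-: read -ra ver1 <<< "$1"   # @336
    IFS=.-: read -ra ver2 <<< "$3"   # @337
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    local lt=0 gt=0
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do   # @364
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        ((ver1[v] > ver2[v])) && { gt=1; break; }   # @367
        ((ver1[v] < ver2[v])) && { lt=1; break; }   # @368
    done
    case "$op" in
        '<') ((lt == 1)) ;;
        '>') ((gt == 1)) ;;
        *)   ((lt == 0 && gt == 0)) ;;   # sketch: treat any other operator as equality
    esac
}

lt() { cmp_versions "$1" '<' "$2"; }   # the wrapper invoked as `lt 1.15 2` above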
00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:46.076 09:43:33 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:46.076 09:43:33 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:46.076 09:43:33 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:46.076 #define SPDK_CONFIG_H 00:11:46.076 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:46.076 #define SPDK_CONFIG_APPS 1 00:11:46.076 #define SPDK_CONFIG_ARCH native 00:11:46.076 #define SPDK_CONFIG_ASAN 1 00:11:46.076 #undef SPDK_CONFIG_AVAHI 00:11:46.076 #undef SPDK_CONFIG_CET 00:11:46.076 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:46.076 #define SPDK_CONFIG_COVERAGE 1 00:11:46.076 #define SPDK_CONFIG_CROSS_PREFIX 00:11:46.076 #undef SPDK_CONFIG_CRYPTO 00:11:46.076 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:46.076 #undef SPDK_CONFIG_CUSTOMOCF 00:11:46.076 #undef SPDK_CONFIG_DAOS 00:11:46.076 #define SPDK_CONFIG_DAOS_DIR 00:11:46.076 #define SPDK_CONFIG_DEBUG 1 00:11:46.076 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:46.076 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:46.076 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:46.076 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:46.076 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:46.076 #undef SPDK_CONFIG_DPDK_UADK 00:11:46.076 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:46.076 #define SPDK_CONFIG_EXAMPLES 1 00:11:46.076 #undef SPDK_CONFIG_FC 00:11:46.076 #define SPDK_CONFIG_FC_PATH 00:11:46.076 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:46.076 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:46.076 #define SPDK_CONFIG_FSDEV 1 00:11:46.076 #undef SPDK_CONFIG_FUSE 00:11:46.076 #undef SPDK_CONFIG_FUZZER 00:11:46.076 #define SPDK_CONFIG_FUZZER_LIB 00:11:46.076 #undef SPDK_CONFIG_GOLANG 00:11:46.076 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:46.076 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:46.076 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:46.076 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:46.076 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:46.076 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:46.076 #undef SPDK_CONFIG_HAVE_LZ4 00:11:46.076 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:46.076 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:46.076 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:46.076 #define SPDK_CONFIG_IDXD 1 00:11:46.076 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:46.076 #undef SPDK_CONFIG_IPSEC_MB 00:11:46.076 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:46.076 #define SPDK_CONFIG_ISAL 1 00:11:46.076 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:46.076 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:46.076 #define SPDK_CONFIG_LIBDIR 00:11:46.076 #undef SPDK_CONFIG_LTO 00:11:46.076 #define SPDK_CONFIG_MAX_LCORES 128 00:11:46.076 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:46.076 #define SPDK_CONFIG_NVME_CUSE 1 00:11:46.076 #undef SPDK_CONFIG_OCF 00:11:46.076 #define SPDK_CONFIG_OCF_PATH 00:11:46.076 #define SPDK_CONFIG_OPENSSL_PATH 00:11:46.076 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:46.076 #define SPDK_CONFIG_PGO_DIR 00:11:46.076 #undef SPDK_CONFIG_PGO_USE 00:11:46.076 #define SPDK_CONFIG_PREFIX /usr/local 00:11:46.076 #undef SPDK_CONFIG_RAID5F 00:11:46.076 #undef SPDK_CONFIG_RBD 00:11:46.076 #define SPDK_CONFIG_RDMA 1 00:11:46.076 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:46.076 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:46.076 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:46.076 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:46.076 #define SPDK_CONFIG_SHARED 1 00:11:46.076 #undef SPDK_CONFIG_SMA 00:11:46.076 #define SPDK_CONFIG_TESTS 1 00:11:46.076 #undef SPDK_CONFIG_TSAN 00:11:46.076 #define SPDK_CONFIG_UBLK 1 00:11:46.076 #define SPDK_CONFIG_UBSAN 1 00:11:46.076 #undef SPDK_CONFIG_UNIT_TESTS 00:11:46.076 #undef SPDK_CONFIG_URING 00:11:46.076 #define SPDK_CONFIG_URING_PATH 00:11:46.076 #undef SPDK_CONFIG_URING_ZNS 00:11:46.076 #undef SPDK_CONFIG_USDT 00:11:46.076 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:46.076 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:46.076 #undef SPDK_CONFIG_VFIO_USER 00:11:46.076 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:46.076 #define SPDK_CONFIG_VHOST 1 00:11:46.076 #define SPDK_CONFIG_VIRTIO 1 00:11:46.076 #undef SPDK_CONFIG_VTUNE 00:11:46.076 #define SPDK_CONFIG_VTUNE_DIR 00:11:46.076 #define SPDK_CONFIG_WERROR 1 00:11:46.076 #define SPDK_CONFIG_WPDK_DIR 00:11:46.076 #define SPDK_CONFIG_XNVME 1 00:11:46.076 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:46.076 09:43:33 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:46.076 09:43:33 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:46.076 09:43:33 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.076 09:43:33 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.076 09:43:33 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.076 09:43:33 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.077 09:43:33 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.077 09:43:33 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.077 09:43:33 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.077 09:43:33 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:46.077 09:43:33 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.077 09:43:33 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:46.077 09:43:33 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:46.077 09:43:33 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:46.077 09:43:33 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@68 -- # uname -s 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:46.339 
09:43:33 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:46.339 09:43:33 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:46.339 09:43:33 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:11:46.339 09:43:33 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:46.339 09:43:33 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:11:46.339 09:43:33 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:46.339 09:43:33 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:11:46.339 09:43:33 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:46.339 09:43:33 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:11:46.339 09:43:33 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:46.339 09:43:33 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:11:46.339 09:43:33 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:46.339 09:43:33 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:11:46.339 09:43:33 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:46.339 09:43:33 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:11:46.339 09:43:33 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:46.339 09:43:33 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:46.340 09:43:33 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:46.340 09:43:33 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:46.340 09:43:33 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
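[editor's note] The sanitizer plumbing traced above reduces to a handful of environment exports; a minimal standalone sketch, with the option strings and the suppressed library copied verbatim from the trace and only the freestanding form being a reconstruction:

# Rebuild the ASan/UBSan/LSan environment shown in the trace
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" > "$asan_suppression_file"   # known fuse3 leak, ignored by LSan
export LSAN_OPTIONS="suppressions=$asan_suppression_file"
export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"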
00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68564 ]] 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68564 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.qPl7A7 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.qPl7A7/tests/xnvme /tmp/spdk.qPl7A7 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:46.341 09:43:33 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13953142784 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5614907392 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13953142784 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5614907392 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:46.341 09:43:33 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98835124224 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:11:46.341 09:43:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=867655680 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:46.342 * Looking for test storage... 
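[editor's note] The "Looking for test storage" pass above walks df -T output into the mounts/fss/avails/sizes/uses arrays and accepts the first candidate with enough room on a persistent filesystem. A simplified sketch of that decision; the 2 GiB requirement and the /home target come from the trace, while the one-line parsing is an editorial assumption:

requested_size=2147483648                      # 2 GiB of scratch space
testdir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
# df -T prints: Filesystem Type 1K-blocks Used Available Use% Mounted on
read -r fs avail_kb < <(df -T "$testdir" | awk 'NR==2 {print $2, $5}')
target_space=$(( avail_kb * 1024 ))            # convert 1K blocks to bytes
if (( target_space >= requested_size )) && [[ $fs != tmpfs && $fs != ramfs ]]; then
    export SPDK_TEST_STORAGE=$testdir          # btrfs on /home qualifies here
fi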
00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13953142784 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:46.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:46.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.342 --rc genhtml_branch_coverage=1 00:11:46.342 --rc genhtml_function_coverage=1 00:11:46.342 --rc genhtml_legend=1 00:11:46.342 --rc geninfo_all_blocks=1 00:11:46.342 --rc geninfo_unexecuted_blocks=1 00:11:46.342 00:11:46.342 ' 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:46.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.342 --rc genhtml_branch_coverage=1 00:11:46.342 --rc genhtml_function_coverage=1 00:11:46.342 --rc genhtml_legend=1 00:11:46.342 --rc geninfo_all_blocks=1 
00:11:46.342 --rc geninfo_unexecuted_blocks=1 00:11:46.342 00:11:46.342 ' 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:46.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.342 --rc genhtml_branch_coverage=1 00:11:46.342 --rc genhtml_function_coverage=1 00:11:46.342 --rc genhtml_legend=1 00:11:46.342 --rc geninfo_all_blocks=1 00:11:46.342 --rc geninfo_unexecuted_blocks=1 00:11:46.342 00:11:46.342 ' 00:11:46.342 09:43:33 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:46.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.342 --rc genhtml_branch_coverage=1 00:11:46.342 --rc genhtml_function_coverage=1 00:11:46.342 --rc genhtml_legend=1 00:11:46.342 --rc geninfo_all_blocks=1 00:11:46.342 --rc geninfo_unexecuted_blocks=1 00:11:46.342 00:11:46.342 ' 00:11:46.342 09:43:33 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.342 09:43:33 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.342 09:43:33 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.342 09:43:33 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.342 09:43:33 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.342 09:43:33 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:46.342 09:43:33 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.342 09:43:33 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:11:46.342 09:43:33 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:11:46.343 09:43:33 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:46.603 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:46.864 Waiting for block devices as requested 00:11:46.864 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:46.864 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.126 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.126 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:52.434 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:52.434 09:43:39 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:11:52.696 09:43:40 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:11:52.696 09:43:40 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:11:52.696 09:43:40 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:11:52.696 09:43:40 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:11:52.696 09:43:40 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:11:52.696 09:43:40 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:11:52.696 09:43:40 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:52.960 No valid GPT data, bailing 00:11:52.960 09:43:40 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:52.960 09:43:40 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:11:52.960 09:43:40 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:11:52.960 09:43:40 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:11:52.960 09:43:40 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:11:52.960 09:43:40 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:11:52.960 09:43:40 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:11:52.960 09:43:40 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:11:52.960 09:43:40 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:11:52.960 09:43:40 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:52.960 09:43:40 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:11:52.960 09:43:40 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:11:52.960 09:43:40 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:11:52.960 09:43:40 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:11:52.960 09:43:40 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:11:52.960 09:43:40 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:11:52.960 09:43:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:11:52.960 09:43:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:52.960 09:43:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.960 09:43:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:52.960 ************************************ 00:11:52.960 START TEST xnvme_rpc 00:11:52.960 ************************************ 00:11:52.960 09:43:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:11:52.960 09:43:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:11:52.960 09:43:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:11:52.960 09:43:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:11:52.960 09:43:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:11:52.960 09:43:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=68946 00:11:52.960 09:43:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 68946 00:11:52.960 09:43:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68946 ']' 00:11:52.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.960 09:43:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:52.961 09:43:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.961 09:43:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.961 09:43:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.961 09:43:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.961 09:43:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.961 [2024-12-05 09:43:40.464488] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
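[editor's note] spdk_tgt is coming up at this point, and the RPC exchange that follows (create the xnvme bdev, query it back through framework_get_config, delete it) reduces to a few rpc.py calls. A condensed sketch; the binary path, RPC names, arguments, and jq filter all appear in the trace, and the polling loop is a stand-in for waitforlisten:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt=$!
# Poll until the target answers on /var/tmp/spdk.sock (waitforlisten, simplified)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio    # same args as rpc_cmd above
$rpc framework_get_config bdev | jq -r \
    '.[] | select(.method == "bdev_xnvme_create").params.filename'
$rpc bdev_xnvme_delete xnvme_bdev
kill $spdk_tgt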
00:11:52.961 [2024-12-05 09:43:40.464855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68946 ] 00:11:53.221 [2024-12-05 09:43:40.628168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.221 [2024-12-05 09:43:40.752622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.161 xnvme_bdev 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 68946 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68946 ']' 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68946 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68946 00:11:54.161 killing process with pid 68946 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68946' 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 68946 00:11:54.161 09:43:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 68946 00:11:56.072 ************************************ 00:11:56.072 END TEST xnvme_rpc 00:11:56.072 ************************************ 00:11:56.072 00:11:56.072 real 0m2.869s 00:11:56.072 user 0m2.861s 00:11:56.072 sys 0m0.474s 00:11:56.072 09:43:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.072 09:43:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.072 09:43:43 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:56.072 09:43:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:56.072 09:43:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.072 09:43:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.072 ************************************ 00:11:56.072 START TEST xnvme_bdevperf 00:11:56.072 ************************************ 00:11:56.072 09:43:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:11:56.072 09:43:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:11:56.072 09:43:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:11:56.072 09:43:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:11:56.072 09:43:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:11:56.072 09:43:43 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:11:56.072 09:43:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:56.072 09:43:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:56.072 { 00:11:56.072 "subsystems": [ 00:11:56.072 { 00:11:56.072 "subsystem": "bdev", 00:11:56.072 "config": [ 00:11:56.072 { 00:11:56.072 "params": { 00:11:56.072 "io_mechanism": "libaio", 00:11:56.072 "conserve_cpu": false, 00:11:56.072 "filename": "/dev/nvme0n1", 00:11:56.072 "name": "xnvme_bdev" 00:11:56.072 }, 00:11:56.072 "method": "bdev_xnvme_create" 00:11:56.072 }, 00:11:56.072 { 00:11:56.072 "method": "bdev_wait_for_examine" 00:11:56.072 } 00:11:56.072 ] 00:11:56.072 } 00:11:56.072 ] 00:11:56.072 } 00:11:56.072 [2024-12-05 09:43:43.361685] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:11:56.073 [2024-12-05 09:43:43.361941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69020 ] 00:11:56.073 [2024-12-05 09:43:43.518132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.073 [2024-12-05 09:43:43.602051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.333 Running I/O for 5 seconds... 00:11:58.218 30755.00 IOPS, 120.14 MiB/s [2024-12-05T09:43:47.233Z] 30457.00 IOPS, 118.97 MiB/s [2024-12-05T09:43:48.174Z] 28869.00 IOPS, 112.77 MiB/s [2024-12-05T09:43:49.117Z] 28626.00 IOPS, 111.82 MiB/s 00:12:01.488 Latency(us) 00:12:01.488 [2024-12-05T09:43:49.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:01.488 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:01.488 xnvme_bdev : 5.00 28566.23 111.59 0.00 0.00 2235.52 198.50 10132.87 00:12:01.488 [2024-12-05T09:43:49.117Z] =================================================================================================================== 00:12:01.488 [2024-12-05T09:43:49.117Z] Total : 28566.23 111.59 0.00 0.00 2235.52 198.50 10132.87 00:12:02.061 09:43:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:02.061 09:43:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:02.061 09:43:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:02.061 09:43:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:02.061 09:43:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:02.061 { 00:12:02.061 "subsystems": [ 00:12:02.061 { 00:12:02.061 "subsystem": "bdev", 00:12:02.061 "config": [ 00:12:02.061 { 00:12:02.061 "params": { 00:12:02.061 "io_mechanism": "libaio", 00:12:02.061 "conserve_cpu": false, 00:12:02.061 "filename": "/dev/nvme0n1", 00:12:02.061 "name": "xnvme_bdev" 00:12:02.061 }, 00:12:02.061 "method": "bdev_xnvme_create" 00:12:02.061 }, 00:12:02.061 { 00:12:02.061 "method": "bdev_wait_for_examine" 00:12:02.061 } 00:12:02.061 ] 00:12:02.061 } 00:12:02.061 ] 00:12:02.061 } 00:12:02.322 [2024-12-05 09:43:49.703369] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
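The gen_conf JSON printed above is the complete configuration bdevperf consumes over /dev/fd/62: one bdev_xnvme_create entry naming the backing device, I/O mechanism, and bdev name, plus a bdev_wait_for_examine entry so the run does not begin before the bdev is registered. A minimal sketch of the same config as a standalone file, copied from this run (the /tmp path is illustrative; adjust /dev/nvme0n1 for another host):

  cat > /tmp/xnvme_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "io_mechanism": "libaio",
              "conserve_cpu": false,
              "filename": "/dev/nvme0n1",
              "name": "xnvme_bdev"
            },
            "method": "bdev_xnvme_create"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF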
00:12:02.322 [2024-12-05 09:43:49.703538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69096 ] 00:12:02.322 [2024-12-05 09:43:49.861765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.584 [2024-12-05 09:43:49.983886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.845 Running I/O for 5 seconds... 00:12:04.743 37248.00 IOPS, 145.50 MiB/s [2024-12-05T09:43:53.317Z] 36701.00 IOPS, 143.36 MiB/s [2024-12-05T09:43:54.705Z] 35480.33 IOPS, 138.60 MiB/s [2024-12-05T09:43:55.649Z] 35475.00 IOPS, 138.57 MiB/s [2024-12-05T09:43:55.649Z] 35168.00 IOPS, 137.38 MiB/s 00:12:08.020 Latency(us) 00:12:08.020 [2024-12-05T09:43:55.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.020 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:08.020 xnvme_bdev : 5.01 35123.26 137.20 0.00 0.00 1816.45 354.46 6503.19 00:12:08.020 [2024-12-05T09:43:55.649Z] =================================================================================================================== 00:12:08.020 [2024-12-05T09:43:55.649Z] Total : 35123.26 137.20 0.00 0.00 1816.45 354.46 6503.19 00:12:08.590 00:12:08.590 real 0m12.828s 00:12:08.590 user 0m5.046s 00:12:08.590 sys 0m6.245s 00:12:08.590 09:43:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.590 09:43:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:08.590 ************************************ 00:12:08.590 END TEST xnvme_bdevperf 00:12:08.590 ************************************ 00:12:08.590 09:43:56 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:08.590 09:43:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:08.590 09:43:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.590 09:43:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:08.590 ************************************ 00:12:08.590 START TEST xnvme_fio_plugin 00:12:08.590 ************************************ 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:08.590 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:08.849 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:08.849 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:08.849 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:08.849 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:08.849 09:43:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:08.849 { 00:12:08.849 "subsystems": [ 00:12:08.849 { 00:12:08.849 "subsystem": "bdev", 00:12:08.849 "config": [ 00:12:08.849 { 00:12:08.849 "params": { 00:12:08.849 "io_mechanism": "libaio", 00:12:08.849 "conserve_cpu": false, 00:12:08.849 "filename": "/dev/nvme0n1", 00:12:08.849 "name": "xnvme_bdev" 00:12:08.849 }, 00:12:08.849 "method": "bdev_xnvme_create" 00:12:08.849 }, 00:12:08.849 { 00:12:08.849 "method": "bdev_wait_for_examine" 00:12:08.849 } 00:12:08.849 ] 00:12:08.849 } 00:12:08.849 ] 00:12:08.849 } 00:12:08.849 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:08.849 fio-3.35 00:12:08.849 Starting 1 thread 00:12:15.424 00:12:15.424 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69210: Thu Dec 5 09:44:02 2024 00:12:15.424 read: IOPS=34.4k, BW=134MiB/s (141MB/s)(672MiB/5001msec) 00:12:15.424 slat (usec): min=4, max=1793, avg=21.16, stdev=88.51 00:12:15.424 clat (usec): min=104, max=7054, avg=1282.40, stdev=531.62 00:12:15.424 lat (usec): min=171, max=7108, avg=1303.56, stdev=524.26 00:12:15.424 clat percentiles (usec): 00:12:15.424 | 1.00th=[ 258], 5.00th=[ 506], 10.00th=[ 644], 20.00th=[ 848], 00:12:15.424 | 30.00th=[ 988], 40.00th=[ 1123], 50.00th=[ 1254], 60.00th=[ 1385], 00:12:15.425 | 70.00th=[ 1516], 80.00th=[ 1680], 90.00th=[ 1909], 95.00th=[ 2147], 00:12:15.425 | 99.00th=[ 2835], 99.50th=[ 3195], 99.90th=[ 3949], 99.95th=[ 4293], 00:12:15.425 | 99.99th=[ 6783] 00:12:15.425 bw ( KiB/s): 
min=127416, max=151912, per=100.00%, avg=137881.00, stdev=7184.06, samples=9 00:12:15.425 iops : min=31854, max=37978, avg=34470.22, stdev=1796.03, samples=9 00:12:15.425 lat (usec) : 250=0.90%, 500=3.94%, 750=10.16%, 1000=16.00% 00:12:15.425 lat (msec) : 2=61.12%, 4=7.79%, 10=0.09% 00:12:15.425 cpu : usr=41.14%, sys=50.76%, ctx=15, majf=0, minf=764 00:12:15.425 IO depths : 1=0.4%, 2=1.1%, 4=3.1%, 8=8.3%, 16=23.2%, 32=61.8%, >=64=2.1% 00:12:15.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.425 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:15.425 issued rwts: total=172004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.425 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:15.425 00:12:15.425 Run status group 0 (all jobs): 00:12:15.425 READ: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=672MiB (705MB), run=5001-5001msec 00:12:15.425 ----------------------------------------------------- 00:12:15.425 Suppressions used: 00:12:15.425 count bytes template 00:12:15.425 1 11 /usr/src/fio/parse.c 00:12:15.425 1 8 libtcmalloc_minimal.so 00:12:15.425 1 904 libcrypto.so 00:12:15.425 ----------------------------------------------------- 00:12:15.425 00:12:15.425 09:44:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:15.686 
09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:15.686 09:44:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:15.686 { 00:12:15.686 "subsystems": [ 00:12:15.686 { 00:12:15.686 "subsystem": "bdev", 00:12:15.686 "config": [ 00:12:15.686 { 00:12:15.686 "params": { 00:12:15.686 "io_mechanism": "libaio", 00:12:15.686 "conserve_cpu": false, 00:12:15.686 "filename": "/dev/nvme0n1", 00:12:15.686 "name": "xnvme_bdev" 00:12:15.686 }, 00:12:15.686 "method": "bdev_xnvme_create" 00:12:15.686 }, 00:12:15.686 { 00:12:15.686 "method": "bdev_wait_for_examine" 00:12:15.686 } 00:12:15.686 ] 00:12:15.686 } 00:12:15.686 ] 00:12:15.686 } 00:12:15.686 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:15.686 fio-3.35 00:12:15.686 Starting 1 thread 00:12:22.260 00:12:22.260 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69307: Thu Dec 5 09:44:08 2024 00:12:22.260 write: IOPS=37.3k, BW=146MiB/s (153MB/s)(728MiB/5001msec); 0 zone resets 00:12:22.260 slat (usec): min=4, max=1694, avg=20.88, stdev=80.42 00:12:22.260 clat (usec): min=89, max=5602, avg=1154.27, stdev=511.10 00:12:22.260 lat (usec): min=162, max=5614, avg=1175.15, stdev=505.53 00:12:22.260 clat percentiles (usec): 00:12:22.260 | 1.00th=[ 243], 5.00th=[ 416], 10.00th=[ 562], 20.00th=[ 734], 00:12:22.260 | 30.00th=[ 865], 40.00th=[ 979], 50.00th=[ 1090], 60.00th=[ 1221], 00:12:22.260 | 70.00th=[ 1369], 80.00th=[ 1549], 90.00th=[ 1795], 95.00th=[ 2057], 00:12:22.260 | 99.00th=[ 2671], 99.50th=[ 2966], 99.90th=[ 3621], 99.95th=[ 4424], 00:12:22.260 | 99.99th=[ 4948] 00:12:22.260 bw ( KiB/s): min=139848, max=164936, per=99.18%, avg=147843.56, stdev=8286.03, samples=9 00:12:22.260 iops : min=34962, max=41234, avg=36960.89, stdev=2071.51, samples=9 00:12:22.260 lat (usec) : 100=0.01%, 250=1.11%, 500=6.47%, 750=13.81%, 1000=20.44% 00:12:22.260 lat (msec) : 2=52.38%, 4=5.72%, 10=0.06% 00:12:22.260 cpu : usr=36.02%, sys=53.96%, ctx=11, majf=0, minf=765 00:12:22.260 IO depths : 1=0.3%, 2=0.9%, 4=2.7%, 8=8.0%, 16=23.1%, 32=62.8%, >=64=2.1% 00:12:22.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.260 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:22.260 issued rwts: total=0,186370,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:22.260 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:22.260 00:12:22.260 Run status group 0 (all jobs): 00:12:22.260 WRITE: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=728MiB (763MB), run=5001-5001msec 00:12:22.260 ----------------------------------------------------- 00:12:22.260 Suppressions used: 00:12:22.260 count bytes template 00:12:22.260 1 11 /usr/src/fio/parse.c 00:12:22.260 1 8 libtcmalloc_minimal.so 00:12:22.260 1 904 libcrypto.so 00:12:22.260 ----------------------------------------------------- 00:12:22.260 00:12:22.260 00:12:22.260 real 0m13.656s 00:12:22.260 user 0m6.557s 00:12:22.260 sys 0m5.807s 
00:12:22.260 09:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.260 ************************************ 00:12:22.260 END TEST xnvme_fio_plugin 00:12:22.260 ************************************ 00:12:22.260 09:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:22.565 09:44:09 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:22.565 09:44:09 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:22.565 09:44:09 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:22.565 09:44:09 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:22.565 09:44:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:22.565 09:44:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.565 09:44:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:22.565 ************************************ 00:12:22.565 START TEST xnvme_rpc 00:12:22.565 ************************************ 00:12:22.565 09:44:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:22.565 09:44:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:22.565 09:44:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:22.565 09:44:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:22.565 09:44:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:22.565 09:44:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69388 00:12:22.565 09:44:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69388 00:12:22.565 09:44:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:22.565 09:44:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69388 ']' 00:12:22.565 09:44:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.565 09:44:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.565 09:44:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.565 09:44:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.565 09:44:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.565 [2024-12-05 09:44:09.984896] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
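Every xnvme_rpc pass follows the shape visible here: start spdk_tgt, create the bdev over JSON-RPC, read each parameter back through framework_get_config and compare it with jq, then delete the bdev and kill the target. A rough standalone equivalent against a running spdk_tgt — assuming SPDK's rpc.py exposes the same positional arguments as the rpc_cmd helper in this log:

  # create: filename, name, io_mechanism (this pass adds -c for conserve_cpu)
  scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
  # read one parameter back; the jq filter is the one rpc_xnvme uses above
  scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
  # tear down
  scripts/rpc.py bdev_xnvme_delete xnvme_bdev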
00:12:22.565 [2024-12-05 09:44:09.985283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69388 ] 00:12:22.565 [2024-12-05 09:44:10.139662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.854 [2024-12-05 09:44:10.241496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 xnvme_bdev 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.424 09:44:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 09:44:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.424 09:44:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69388 00:12:23.424 09:44:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69388 ']' 00:12:23.424 09:44:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69388 00:12:23.424 09:44:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:23.424 09:44:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.424 09:44:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69388 00:12:23.424 killing process with pid 69388 00:12:23.424 09:44:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.424 09:44:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.424 09:44:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69388' 00:12:23.424 09:44:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69388 00:12:23.424 09:44:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69388 00:12:25.339 ************************************ 00:12:25.339 END TEST xnvme_rpc 00:12:25.339 ************************************ 00:12:25.339 00:12:25.339 real 0m2.717s 00:12:25.339 user 0m2.755s 00:12:25.339 sys 0m0.399s 00:12:25.339 09:44:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.339 09:44:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.339 09:44:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:25.339 09:44:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:25.339 09:44:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.339 09:44:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.339 ************************************ 00:12:25.339 START TEST xnvme_bdevperf 00:12:25.339 ************************************ 00:12:25.339 09:44:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:25.339 09:44:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:25.339 09:44:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:25.339 09:44:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:25.339 09:44:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:25.339 09:44:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:12:25.339 09:44:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:25.339 09:44:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:25.339 { 00:12:25.339 "subsystems": [ 00:12:25.339 { 00:12:25.339 "subsystem": "bdev", 00:12:25.339 "config": [ 00:12:25.339 { 00:12:25.339 "params": { 00:12:25.339 "io_mechanism": "libaio", 00:12:25.339 "conserve_cpu": true, 00:12:25.339 "filename": "/dev/nvme0n1", 00:12:25.339 "name": "xnvme_bdev" 00:12:25.339 }, 00:12:25.339 "method": "bdev_xnvme_create" 00:12:25.339 }, 00:12:25.339 { 00:12:25.339 "method": "bdev_wait_for_examine" 00:12:25.339 } 00:12:25.339 ] 00:12:25.339 } 00:12:25.339 ] 00:12:25.339 } 00:12:25.339 [2024-12-05 09:44:12.773542] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:12:25.339 [2024-12-05 09:44:12.773680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69462 ] 00:12:25.339 [2024-12-05 09:44:12.933419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.599 [2024-12-05 09:44:13.046829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.859 Running I/O for 5 seconds... 00:12:27.743 35115.00 IOPS, 137.17 MiB/s [2024-12-05T09:44:16.756Z] 33482.50 IOPS, 130.79 MiB/s [2024-12-05T09:44:17.324Z] 33823.00 IOPS, 132.12 MiB/s [2024-12-05T09:44:18.708Z] 33850.75 IOPS, 132.23 MiB/s 00:12:31.079 Latency(us) 00:12:31.079 [2024-12-05T09:44:18.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.079 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:31.079 xnvme_bdev : 5.00 33991.97 132.78 0.00 0.00 1878.24 207.95 10334.52 00:12:31.079 [2024-12-05T09:44:18.708Z] =================================================================================================================== 00:12:31.079 [2024-12-05T09:44:18.708Z] Total : 33991.97 132.78 0.00 0.00 1878.24 207.95 10334.52 00:12:31.646 09:44:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:31.646 09:44:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:31.646 09:44:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:31.646 09:44:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:31.646 09:44:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:31.646 { 00:12:31.646 "subsystems": [ 00:12:31.646 { 00:12:31.646 "subsystem": "bdev", 00:12:31.646 "config": [ 00:12:31.646 { 00:12:31.646 "params": { 00:12:31.646 "io_mechanism": "libaio", 00:12:31.646 "conserve_cpu": true, 00:12:31.646 "filename": "/dev/nvme0n1", 00:12:31.646 "name": "xnvme_bdev" 00:12:31.646 }, 00:12:31.646 "method": "bdev_xnvme_create" 00:12:31.646 }, 00:12:31.646 { 00:12:31.646 "method": "bdev_wait_for_examine" 00:12:31.646 } 00:12:31.646 ] 00:12:31.646 } 00:12:31.646 ] 00:12:31.646 } 00:12:31.646 [2024-12-05 09:44:19.114855] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
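The bdevperf passes all reuse one command line, differing only in the workload flag: queue depth 64 (-q), 4 KiB I/O (-o 4096), 5-second runs (-t 5), pinned to the xnvme_bdev target (-T). A sketch replaying it against a saved config file instead of the /dev/fd/62 pipe (file path reuses the illustrative one above):

  ./build/examples/bdevperf --json /tmp/xnvme_bdev.json \
      -q 64 -o 4096 -w randread -t 5 -T xnvme_bdev
  # the write pass starting here is the same command with -w randwrite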
00:12:31.646 [2024-12-05 09:44:19.114974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69537 ] 00:12:31.646 [2024-12-05 09:44:19.272477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.906 [2024-12-05 09:44:19.367093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.167 Running I/O for 5 seconds... 00:12:34.054 36493.00 IOPS, 142.55 MiB/s [2024-12-05T09:44:23.071Z] 36921.50 IOPS, 144.22 MiB/s [2024-12-05T09:44:23.644Z] 36964.33 IOPS, 144.39 MiB/s [2024-12-05T09:44:25.033Z] 37140.25 IOPS, 145.08 MiB/s [2024-12-05T09:44:25.033Z] 36828.80 IOPS, 143.86 MiB/s 00:12:37.404 Latency(us) 00:12:37.404 [2024-12-05T09:44:25.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.404 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:37.404 xnvme_bdev : 5.00 36815.78 143.81 0.00 0.00 1733.90 356.04 8418.86 00:12:37.404 [2024-12-05T09:44:25.033Z] =================================================================================================================== 00:12:37.404 [2024-12-05T09:44:25.033Z] Total : 36815.78 143.81 0.00 0.00 1733.90 356.04 8418.86 00:12:37.988 00:12:37.988 real 0m12.727s 00:12:37.988 user 0m5.052s 00:12:37.988 sys 0m6.058s 00:12:37.988 09:44:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.988 09:44:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:37.988 ************************************ 00:12:37.988 END TEST xnvme_bdevperf 00:12:37.988 ************************************ 00:12:37.988 09:44:25 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:37.988 09:44:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:37.988 09:44:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.988 09:44:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.988 ************************************ 00:12:37.988 START TEST xnvme_fio_plugin 00:12:37.988 ************************************ 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:37.988 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:37.989 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:37.989 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:37.989 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:37.989 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:37.989 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:37.989 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:37.989 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:37.989 09:44:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:37.989 { 00:12:37.989 "subsystems": [ 00:12:37.989 { 00:12:37.989 "subsystem": "bdev", 00:12:37.989 "config": [ 00:12:37.989 { 00:12:37.989 "params": { 00:12:37.989 "io_mechanism": "libaio", 00:12:37.989 "conserve_cpu": true, 00:12:37.989 "filename": "/dev/nvme0n1", 00:12:37.989 "name": "xnvme_bdev" 00:12:37.989 }, 00:12:37.989 "method": "bdev_xnvme_create" 00:12:37.989 }, 00:12:37.989 { 00:12:37.989 "method": "bdev_wait_for_examine" 00:12:37.989 } 00:12:37.989 ] 00:12:37.989 } 00:12:37.989 ] 00:12:37.989 } 00:12:38.247 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:38.247 fio-3.35 00:12:38.247 Starting 1 thread 00:12:44.829 00:12:44.829 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69651: Thu Dec 5 09:44:31 2024 00:12:44.829 read: IOPS=34.7k, BW=136MiB/s (142MB/s)(678MiB/5001msec) 00:12:44.829 slat (usec): min=4, max=1966, avg=18.14, stdev=86.16 00:12:44.829 clat (usec): min=96, max=8259, avg=1346.16, stdev=511.88 00:12:44.829 lat (usec): min=164, max=8293, avg=1364.30, stdev=503.28 00:12:44.829 clat percentiles (usec): 00:12:44.829 | 1.00th=[ 277], 5.00th=[ 553], 10.00th=[ 709], 20.00th=[ 922], 00:12:44.829 | 30.00th=[ 1090], 40.00th=[ 1221], 50.00th=[ 1352], 60.00th=[ 1467], 00:12:44.829 | 70.00th=[ 1598], 80.00th=[ 1729], 90.00th=[ 1926], 95.00th=[ 2114], 00:12:44.829 | 99.00th=[ 2671], 99.50th=[ 3032], 99.90th=[ 3949], 99.95th=[ 4424], 00:12:44.829 | 99.99th=[ 8029] 00:12:44.829 bw ( KiB/s): 
min=133032, max=147824, per=100.00%, avg=139468.44, stdev=5061.12, samples=9 00:12:44.829 iops : min=33258, max=36956, avg=34867.11, stdev=1265.28, samples=9 00:12:44.829 lat (usec) : 100=0.01%, 250=0.68%, 500=3.18%, 750=7.74%, 1000=12.87% 00:12:44.829 lat (msec) : 2=67.91%, 4=7.52%, 10=0.09% 00:12:44.829 cpu : usr=50.56%, sys=42.66%, ctx=13, majf=0, minf=764 00:12:44.829 IO depths : 1=0.7%, 2=1.5%, 4=3.5%, 8=8.8%, 16=23.0%, 32=60.4%, >=64=2.0% 00:12:44.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.829 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:44.829 issued rwts: total=173630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.829 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:44.829 00:12:44.829 Run status group 0 (all jobs): 00:12:44.829 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=678MiB (711MB), run=5001-5001msec 00:12:44.829 ----------------------------------------------------- 00:12:44.829 Suppressions used: 00:12:44.829 count bytes template 00:12:44.829 1 11 /usr/src/fio/parse.c 00:12:44.829 1 8 libtcmalloc_minimal.so 00:12:44.829 1 904 libcrypto.so 00:12:44.829 ----------------------------------------------------- 00:12:44.829 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:44.829 09:44:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:44.829 { 00:12:44.829 "subsystems": [ 00:12:44.829 { 00:12:44.829 "subsystem": "bdev", 00:12:44.829 "config": [ 00:12:44.829 { 00:12:44.829 "params": { 00:12:44.829 "io_mechanism": "libaio", 00:12:44.829 "conserve_cpu": true, 00:12:44.829 "filename": "/dev/nvme0n1", 00:12:44.829 "name": "xnvme_bdev" 00:12:44.829 }, 00:12:44.829 "method": "bdev_xnvme_create" 00:12:44.829 }, 00:12:44.829 { 00:12:44.829 "method": "bdev_wait_for_examine" 00:12:44.829 } 00:12:44.829 ] 00:12:44.829 } 00:12:44.829 ] 00:12:44.829 } 00:12:45.089 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:45.089 fio-3.35 00:12:45.089 Starting 1 thread 00:12:51.670 00:12:51.670 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69748: Thu Dec 5 09:44:38 2024 00:12:51.670 write: IOPS=34.9k, BW=137MiB/s (143MB/s)(683MiB/5001msec); 0 zone resets 00:12:51.670 slat (usec): min=4, max=4709, avg=19.09, stdev=84.15 00:12:51.670 clat (usec): min=107, max=5877, avg=1305.98, stdev=518.50 00:12:51.670 lat (usec): min=181, max=5893, avg=1325.07, stdev=511.20 00:12:51.670 clat percentiles (usec): 00:12:51.670 | 1.00th=[ 277], 5.00th=[ 510], 10.00th=[ 660], 20.00th=[ 865], 00:12:51.670 | 30.00th=[ 1012], 40.00th=[ 1156], 50.00th=[ 1287], 60.00th=[ 1418], 00:12:51.670 | 70.00th=[ 1549], 80.00th=[ 1713], 90.00th=[ 1926], 95.00th=[ 2147], 00:12:51.670 | 99.00th=[ 2737], 99.50th=[ 3097], 99.90th=[ 3818], 99.95th=[ 4015], 00:12:51.670 | 99.99th=[ 5145] 00:12:51.670 bw ( KiB/s): min=133256, max=157760, per=100.00%, avg=140042.67, stdev=7031.95, samples=9 00:12:51.670 iops : min=33314, max=39440, avg=35010.67, stdev=1757.99, samples=9 00:12:51.670 lat (usec) : 250=0.72%, 500=3.99%, 750=9.29%, 1000=15.04% 00:12:51.670 lat (msec) : 2=63.06%, 4=7.86%, 10=0.05% 00:12:51.670 cpu : usr=48.26%, sys=43.80%, ctx=14, majf=0, minf=765 00:12:51.670 IO depths : 1=0.6%, 2=1.4%, 4=3.4%, 8=8.8%, 16=23.2%, 32=60.6%, >=64=2.1% 00:12:51.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.670 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:51.670 issued rwts: total=0,174759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.670 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:51.670 00:12:51.670 Run status group 0 (all jobs): 00:12:51.670 WRITE: bw=137MiB/s (143MB/s), 137MiB/s-137MiB/s (143MB/s-143MB/s), io=683MiB (716MB), run=5001-5001msec 00:12:51.671 ----------------------------------------------------- 00:12:51.671 Suppressions used: 00:12:51.671 count bytes template 00:12:51.671 1 11 /usr/src/fio/parse.c 00:12:51.671 1 8 libtcmalloc_minimal.so 00:12:51.671 1 904 libcrypto.so 00:12:51.671 ----------------------------------------------------- 00:12:51.671 00:12:51.671 ************************************ 00:12:51.671 END TEST xnvme_fio_plugin 
00:12:51.671 ************************************ 00:12:51.671 00:12:51.671 real 0m13.688s 00:12:51.671 user 0m7.657s 00:12:51.671 sys 0m4.900s 00:12:51.671 09:44:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.671 09:44:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:51.671 09:44:39 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:51.671 09:44:39 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:51.671 09:44:39 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:51.671 09:44:39 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:51.671 09:44:39 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:51.671 09:44:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:51.671 09:44:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:51.671 09:44:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:51.671 09:44:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:51.671 09:44:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:51.671 09:44:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.671 09:44:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.671 ************************************ 00:12:51.671 START TEST xnvme_rpc 00:12:51.671 ************************************ 00:12:51.671 09:44:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:51.671 09:44:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:51.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.671 09:44:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:51.671 09:44:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:51.671 09:44:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:51.671 09:44:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69832 00:12:51.671 09:44:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69832 00:12:51.671 09:44:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69832 ']' 00:12:51.671 09:44:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.671 09:44:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.671 09:44:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.671 09:44:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:51.671 09:44:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.671 09:44:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.931 [2024-12-05 09:44:39.305302] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
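The fio_plugin passes that just finished drive the same bdev through fio's external spdk_bdev engine instead of bdevperf: the JSON config goes in via --spdk_json_conf, --filename selects the bdev by name, and the sanitizer-detection block seen above prepends libasan to LD_PRELOAD ahead of the plugin so the ASan-linked engine loads cleanly. A sketch of the invocation with the flags from this run (only the config path differs, and it is illustrative):

  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev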
00:12:51.931 [2024-12-05 09:44:39.305422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69832 ] 00:12:51.931 [2024-12-05 09:44:39.465983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.191 [2024-12-05 09:44:39.562475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.763 xnvme_bdev 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69832 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69832 ']' 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69832 00:12:52.763 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:52.764 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.764 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69832 00:12:52.764 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.764 killing process with pid 69832 00:12:52.764 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.764 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69832' 00:12:52.764 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69832 00:12:52.764 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69832 00:12:54.670 ************************************ 00:12:54.670 END TEST xnvme_rpc 00:12:54.670 ************************************ 00:12:54.670 00:12:54.670 real 0m2.680s 00:12:54.670 user 0m2.777s 00:12:54.670 sys 0m0.353s 00:12:54.670 09:44:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.670 09:44:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.670 09:44:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:54.670 09:44:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:54.670 09:44:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.670 09:44:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:54.670 ************************************ 00:12:54.670 START TEST xnvme_bdevperf 00:12:54.670 ************************************ 00:12:54.670 09:44:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:54.670 09:44:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:54.670 09:44:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:12:54.670 09:44:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:54.670 09:44:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:54.670 09:44:41 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:54.671 09:44:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:54.671 09:44:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:54.671 { 00:12:54.671 "subsystems": [ 00:12:54.671 { 00:12:54.671 "subsystem": "bdev", 00:12:54.671 "config": [ 00:12:54.671 { 00:12:54.671 "params": { 00:12:54.671 "io_mechanism": "io_uring", 00:12:54.671 "conserve_cpu": false, 00:12:54.671 "filename": "/dev/nvme0n1", 00:12:54.671 "name": "xnvme_bdev" 00:12:54.671 }, 00:12:54.671 "method": "bdev_xnvme_create" 00:12:54.671 }, 00:12:54.671 { 00:12:54.671 "method": "bdev_wait_for_examine" 00:12:54.671 } 00:12:54.671 ] 00:12:54.671 } 00:12:54.671 ] 00:12:54.671 } 00:12:54.671 [2024-12-05 09:44:42.041040] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:12:54.671 [2024-12-05 09:44:42.041188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69900 ] 00:12:54.671 [2024-12-05 09:44:42.196502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.671 [2024-12-05 09:44:42.293086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.951 Running I/O for 5 seconds... 00:12:57.320 35516.00 IOPS, 138.73 MiB/s [2024-12-05T09:44:45.897Z] 35286.00 IOPS, 137.84 MiB/s [2024-12-05T09:44:46.836Z] 35200.67 IOPS, 137.50 MiB/s [2024-12-05T09:44:47.773Z] 35143.75 IOPS, 137.28 MiB/s [2024-12-05T09:44:47.773Z] 35120.40 IOPS, 137.19 MiB/s 00:13:00.144 Latency(us) 00:13:00.144 [2024-12-05T09:44:47.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.144 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:00.144 xnvme_bdev : 5.00 35107.07 137.14 0.00 0.00 1819.53 363.91 10435.35 00:13:00.144 [2024-12-05T09:44:47.773Z] =================================================================================================================== 00:13:00.144 [2024-12-05T09:44:47.773Z] Total : 35107.07 137.14 0.00 0.00 1819.53 363.91 10435.35 00:13:00.714 09:44:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:00.714 09:44:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:00.714 09:44:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:00.714 09:44:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:00.714 09:44:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:00.714 { 00:13:00.714 "subsystems": [ 00:13:00.714 { 00:13:00.714 "subsystem": "bdev", 00:13:00.714 "config": [ 00:13:00.714 { 00:13:00.714 "params": { 00:13:00.714 "io_mechanism": "io_uring", 00:13:00.714 "conserve_cpu": false, 00:13:00.714 "filename": "/dev/nvme0n1", 00:13:00.714 "name": "xnvme_bdev" 00:13:00.714 }, 00:13:00.714 "method": "bdev_xnvme_create" 00:13:00.714 }, 00:13:00.714 { 00:13:00.714 "method": "bdev_wait_for_examine" 00:13:00.714 } 00:13:00.714 ] 00:13:00.714 } 00:13:00.714 ] 00:13:00.714 } 00:13:00.714 [2024-12-05 09:44:48.332071] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
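Switching I/O mechanisms touches nothing but the config: the JSON above is the earlier libaio one with "io_mechanism": "io_uring" substituted, and the whole rpc/bdevperf/fio cycle then repeats unchanged. For a manual run the same one-line change is enough (sketch, reusing the illustrative config file):

  # same config as before, different engine
  sed -i 's/"io_mechanism": "libaio"/"io_mechanism": "io_uring"/' /tmp/xnvme_bdev.json
  ./build/examples/bdevperf --json /tmp/xnvme_bdev.json \
      -q 64 -o 4096 -w randwrite -t 5 -T xnvme_bdev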
00:13:00.714 [2024-12-05 09:44:48.332185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69976 ] 00:13:00.974 [2024-12-05 09:44:48.494464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.974 [2024-12-05 09:44:48.591297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.234 Running I/O for 5 seconds... 00:13:03.555 36652.00 IOPS, 143.17 MiB/s [2024-12-05T09:44:52.127Z] 36918.50 IOPS, 144.21 MiB/s [2024-12-05T09:44:53.069Z] 36037.67 IOPS, 140.77 MiB/s [2024-12-05T09:44:54.009Z] 35228.50 IOPS, 137.61 MiB/s 00:13:06.380 Latency(us) 00:13:06.380 [2024-12-05T09:44:54.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.380 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:06.380 xnvme_bdev : 5.00 35233.49 137.63 0.00 0.00 1812.81 368.64 7511.43 00:13:06.380 [2024-12-05T09:44:54.009Z] =================================================================================================================== 00:13:06.380 [2024-12-05T09:44:54.009Z] Total : 35233.49 137.63 0.00 0.00 1812.81 368.64 7511.43 00:13:06.947 00:13:06.947 real 0m12.576s 00:13:06.947 user 0m6.063s 00:13:06.947 sys 0m6.271s 00:13:06.947 ************************************ 00:13:06.947 END TEST xnvme_bdevperf 00:13:06.947 ************************************ 00:13:06.947 09:44:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.947 09:44:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:07.208 09:44:54 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:07.208 09:44:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:07.208 09:44:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.208 09:44:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.208 ************************************ 00:13:07.208 START TEST xnvme_fio_plugin 00:13:07.208 ************************************ 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:07.208 09:44:54 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:07.208 09:44:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:07.208 { 00:13:07.208 "subsystems": [ 00:13:07.208 { 00:13:07.208 "subsystem": "bdev", 00:13:07.208 "config": [ 00:13:07.208 { 00:13:07.208 "params": { 00:13:07.208 "io_mechanism": "io_uring", 00:13:07.208 "conserve_cpu": false, 00:13:07.208 "filename": "/dev/nvme0n1", 00:13:07.208 "name": "xnvme_bdev" 00:13:07.208 }, 00:13:07.208 "method": "bdev_xnvme_create" 00:13:07.208 }, 00:13:07.208 { 00:13:07.208 "method": "bdev_wait_for_examine" 00:13:07.208 } 00:13:07.208 ] 00:13:07.208 } 00:13:07.208 ] 00:13:07.208 } 00:13:07.208 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:07.208 fio-3.35 00:13:07.208 Starting 1 thread 00:13:13.777 00:13:13.777 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70089: Thu Dec 5 09:45:00 2024 00:13:13.777 read: IOPS=33.4k, BW=130MiB/s (137MB/s)(652MiB/5001msec) 00:13:13.777 slat (nsec): min=2870, max=78239, avg=3273.51, stdev=1649.12 00:13:13.777 clat (usec): min=962, max=3677, avg=1784.51, stdev=305.49 00:13:13.777 lat (usec): min=965, max=3729, avg=1787.79, stdev=305.66 00:13:13.777 clat percentiles (usec): 00:13:13.777 | 1.00th=[ 1205], 5.00th=[ 1336], 10.00th=[ 1418], 20.00th=[ 1516], 00:13:13.777 | 30.00th=[ 1614], 40.00th=[ 1696], 50.00th=[ 1762], 60.00th=[ 1844], 00:13:13.777 | 70.00th=[ 1926], 80.00th=[ 2024], 90.00th=[ 2180], 95.00th=[ 2311], 00:13:13.777 | 99.00th=[ 2671], 99.50th=[ 2802], 99.90th=[ 3130], 99.95th=[ 3228], 00:13:13.777 | 99.99th=[ 3490] 00:13:13.777 bw ( KiB/s): min=131584, max=136192, per=100.00%, avg=134542.22, 
stdev=1552.50, samples=9 00:13:13.777 iops : min=32896, max=34048, avg=33635.56, stdev=388.13, samples=9 00:13:13.777 lat (usec) : 1000=0.02% 00:13:13.777 lat (msec) : 2=77.33%, 4=22.65% 00:13:13.777 cpu : usr=30.66%, sys=68.36%, ctx=12, majf=0, minf=762 00:13:13.777 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:13.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.777 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:13.777 issued rwts: total=166976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.777 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:13.777 00:13:13.777 Run status group 0 (all jobs): 00:13:13.777 READ: bw=130MiB/s (137MB/s), 130MiB/s-130MiB/s (137MB/s-137MB/s), io=652MiB (684MB), run=5001-5001msec 00:13:13.777 ----------------------------------------------------- 00:13:13.777 Suppressions used: 00:13:13.777 count bytes template 00:13:13.777 1 11 /usr/src/fio/parse.c 00:13:13.777 1 8 libtcmalloc_minimal.so 00:13:13.777 1 904 libcrypto.so 00:13:13.777 ----------------------------------------------------- 00:13:13.777 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:13.777 09:45:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.038 { 00:13:14.038 "subsystems": [ 00:13:14.038 { 00:13:14.038 "subsystem": "bdev", 00:13:14.038 "config": [ 00:13:14.038 { 00:13:14.038 "params": { 00:13:14.038 "io_mechanism": "io_uring", 00:13:14.038 "conserve_cpu": false, 00:13:14.038 "filename": "/dev/nvme0n1", 00:13:14.038 "name": "xnvme_bdev" 00:13:14.038 }, 00:13:14.038 "method": "bdev_xnvme_create" 00:13:14.038 }, 00:13:14.038 { 00:13:14.038 "method": "bdev_wait_for_examine" 00:13:14.038 } 00:13:14.038 ] 00:13:14.038 } 00:13:14.038 ] 00:13:14.038 } 00:13:14.038 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:14.038 fio-3.35 00:13:14.038 Starting 1 thread 00:13:20.617 00:13:20.617 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70181: Thu Dec 5 09:45:07 2024 00:13:20.617 write: IOPS=33.7k, BW=132MiB/s (138MB/s)(659MiB/5002msec); 0 zone resets 00:13:20.617 slat (nsec): min=2893, max=50550, avg=3324.38, stdev=1619.63 00:13:20.617 clat (usec): min=520, max=10306, avg=1763.53, stdev=353.15 00:13:20.617 lat (usec): min=523, max=10328, avg=1766.85, stdev=353.34 00:13:20.617 clat percentiles (usec): 00:13:20.617 | 1.00th=[ 1188], 5.00th=[ 1319], 10.00th=[ 1385], 20.00th=[ 1483], 00:13:20.617 | 30.00th=[ 1565], 40.00th=[ 1647], 50.00th=[ 1729], 60.00th=[ 1811], 00:13:20.617 | 70.00th=[ 1909], 80.00th=[ 2024], 90.00th=[ 2180], 95.00th=[ 2311], 00:13:20.617 | 99.00th=[ 2638], 99.50th=[ 2769], 99.90th=[ 3097], 99.95th=[ 3720], 00:13:20.617 | 99.99th=[10290] 00:13:20.617 bw ( KiB/s): min=122496, max=143360, per=99.87%, avg=134757.33, stdev=7970.67, samples=9 00:13:20.617 iops : min=30624, max=35840, avg=33689.33, stdev=1992.67, samples=9 00:13:20.617 lat (usec) : 750=0.01%, 1000=0.04% 00:13:20.617 lat (msec) : 2=78.27%, 4=21.64%, 10=0.01%, 20=0.04% 00:13:20.617 cpu : usr=33.05%, sys=65.95%, ctx=17, majf=0, minf=763 00:13:20.617 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:20.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.617 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:20.617 issued rwts: total=0,168736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.617 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:20.617 00:13:20.617 Run status group 0 (all jobs): 00:13:20.617 WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=659MiB (691MB), run=5002-5002msec 00:13:20.617 ----------------------------------------------------- 00:13:20.617 Suppressions used: 00:13:20.617 count bytes template 00:13:20.617 1 11 /usr/src/fio/parse.c 00:13:20.617 1 8 libtcmalloc_minimal.so 00:13:20.617 1 904 libcrypto.so 00:13:20.618 ----------------------------------------------------- 00:13:20.618 00:13:20.618 00:13:20.618 real 0m13.567s 00:13:20.618 user 0m5.914s 00:13:20.618 sys 0m7.229s 00:13:20.618 ************************************ 00:13:20.618 END TEST xnvme_fio_plugin 
00:13:20.618 ************************************ 00:13:20.618 09:45:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.618 09:45:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:20.618 09:45:08 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:20.618 09:45:08 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:20.618 09:45:08 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:20.618 09:45:08 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:20.618 09:45:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:20.618 09:45:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.618 09:45:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.877 ************************************ 00:13:20.877 START TEST xnvme_rpc 00:13:20.878 ************************************ 00:13:20.878 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:20.878 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:20.878 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:20.878 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:20.878 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:20.878 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70267 00:13:20.878 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70267 00:13:20.878 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70267 ']' 00:13:20.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.878 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.878 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.878 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.878 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.878 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.878 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:20.878 [2024-12-05 09:45:08.319012] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
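# ----------------------------------------------------------------------
# A minimal sketch, not captured output: the xnvme_rpc test that follows
# drives spdk_tgt through rpc_cmd, which forwards its arguments to SPDK's
# rpc.py client. The same round-trip by hand, assuming the repo layout
# shown above, would be:
cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# expected: true, since -c turns conserve_cpu on for this pass
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
# ----------------------------------------------------------------------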
00:13:20.878 [2024-12-05 09:45:08.319136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70267 ] 00:13:20.878 [2024-12-05 09:45:08.478369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.137 [2024-12-05 09:45:08.574009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.706 xnvme_bdev 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.706 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70267 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70267 ']' 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70267 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.707 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70267 00:13:21.965 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.965 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.965 killing process with pid 70267 00:13:21.965 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70267' 00:13:21.965 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70267 00:13:21.965 09:45:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70267 00:13:23.351 00:13:23.351 real 0m2.596s 00:13:23.351 user 0m2.720s 00:13:23.351 sys 0m0.330s 00:13:23.351 09:45:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.351 ************************************ 00:13:23.351 END TEST xnvme_rpc 00:13:23.351 ************************************ 00:13:23.351 09:45:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.351 09:45:10 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:23.351 09:45:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:23.351 09:45:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.351 09:45:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.351 ************************************ 00:13:23.351 START TEST xnvme_bdevperf 00:13:23.351 ************************************ 00:13:23.351 09:45:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:23.351 09:45:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:23.351 09:45:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:23.351 09:45:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:23.351 09:45:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:23.351 09:45:10 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:23.351 09:45:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:23.351 09:45:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:23.351 { 00:13:23.351 "subsystems": [ 00:13:23.351 { 00:13:23.351 "subsystem": "bdev", 00:13:23.351 "config": [ 00:13:23.351 { 00:13:23.351 "params": { 00:13:23.351 "io_mechanism": "io_uring", 00:13:23.351 "conserve_cpu": true, 00:13:23.351 "filename": "/dev/nvme0n1", 00:13:23.351 "name": "xnvme_bdev" 00:13:23.351 }, 00:13:23.351 "method": "bdev_xnvme_create" 00:13:23.351 }, 00:13:23.351 { 00:13:23.351 "method": "bdev_wait_for_examine" 00:13:23.351 } 00:13:23.351 ] 00:13:23.351 } 00:13:23.351 ] 00:13:23.351 } 00:13:23.351 [2024-12-05 09:45:10.967036] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:13:23.351 [2024-12-05 09:45:10.967154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70336 ] 00:13:23.612 [2024-12-05 09:45:11.127455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.873 [2024-12-05 09:45:11.248437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.136 Running I/O for 5 seconds... 00:13:26.023 31903.00 IOPS, 124.62 MiB/s [2024-12-05T09:45:14.592Z] 34624.50 IOPS, 135.25 MiB/s [2024-12-05T09:45:15.972Z] 35670.00 IOPS, 139.34 MiB/s [2024-12-05T09:45:16.544Z] 36189.75 IOPS, 141.37 MiB/s [2024-12-05T09:45:16.803Z] 36463.40 IOPS, 142.44 MiB/s 00:13:29.174 Latency(us) 00:13:29.174 [2024-12-05T09:45:16.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.174 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:29.174 xnvme_bdev : 5.01 36432.10 142.31 0.00 0.00 1753.06 696.32 11443.59 00:13:29.174 [2024-12-05T09:45:16.803Z] =================================================================================================================== 00:13:29.174 [2024-12-05T09:45:16.803Z] Total : 36432.10 142.31 0.00 0.00 1753.06 696.32 11443.59 00:13:29.743 09:45:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:29.743 09:45:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:29.743 09:45:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:29.743 09:45:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:29.743 09:45:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:29.743 { 00:13:29.743 "subsystems": [ 00:13:29.743 { 00:13:29.743 "subsystem": "bdev", 00:13:29.743 "config": [ 00:13:29.743 { 00:13:29.743 "params": { 00:13:29.743 "io_mechanism": "io_uring", 00:13:29.743 "conserve_cpu": true, 00:13:29.743 "filename": "/dev/nvme0n1", 00:13:29.743 "name": "xnvme_bdev" 00:13:29.743 }, 00:13:29.743 "method": "bdev_xnvme_create" 00:13:29.743 }, 00:13:29.743 { 00:13:29.743 "method": "bdev_wait_for_examine" 00:13:29.743 } 00:13:29.743 ] 00:13:29.743 } 00:13:29.743 ] 00:13:29.743 } 00:13:29.743 [2024-12-05 09:45:17.320201] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:13:29.743 [2024-12-05 09:45:17.320318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70411 ] 00:13:30.003 [2024-12-05 09:45:17.481641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.003 [2024-12-05 09:45:17.577491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.261 Running I/O for 5 seconds... 00:13:32.659 39636.00 IOPS, 154.83 MiB/s [2024-12-05T09:45:20.859Z] 39760.00 IOPS, 155.31 MiB/s [2024-12-05T09:45:22.240Z] 39466.00 IOPS, 154.16 MiB/s [2024-12-05T09:45:23.181Z] 39444.50 IOPS, 154.08 MiB/s [2024-12-05T09:45:23.182Z] 38480.00 IOPS, 150.31 MiB/s 00:13:35.553 Latency(us) 00:13:35.553 [2024-12-05T09:45:23.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.553 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:35.553 xnvme_bdev : 5.00 38462.65 150.24 0.00 0.00 1660.17 318.23 7914.73 00:13:35.553 [2024-12-05T09:45:23.182Z] =================================================================================================================== 00:13:35.553 [2024-12-05T09:45:23.182Z] Total : 38462.65 150.24 0.00 0.00 1660.17 318.23 7914.73 00:13:36.128 00:13:36.128 real 0m12.711s 00:13:36.128 user 0m9.936s 00:13:36.128 sys 0m2.291s 00:13:36.128 ************************************ 00:13:36.128 09:45:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.128 09:45:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:36.128 END TEST xnvme_bdevperf 00:13:36.128 ************************************ 00:13:36.128 09:45:23 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:36.128 09:45:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:36.128 09:45:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.128 09:45:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.128 ************************************ 00:13:36.128 START TEST xnvme_fio_plugin 00:13:36.128 ************************************ 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:36.128 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:36.128 { 00:13:36.128 "subsystems": [ 00:13:36.128 { 00:13:36.128 "subsystem": "bdev", 00:13:36.128 "config": [ 00:13:36.128 { 00:13:36.128 "params": { 00:13:36.128 "io_mechanism": "io_uring", 00:13:36.128 "conserve_cpu": true, 00:13:36.128 "filename": "/dev/nvme0n1", 00:13:36.128 "name": "xnvme_bdev" 00:13:36.128 }, 00:13:36.128 "method": "bdev_xnvme_create" 00:13:36.128 }, 00:13:36.128 { 00:13:36.128 "method": "bdev_wait_for_examine" 00:13:36.128 } 00:13:36.128 ] 00:13:36.128 } 00:13:36.128 ] 00:13:36.128 } 00:13:36.389 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:36.389 fio-3.35 00:13:36.389 Starting 1 thread 00:13:42.960 00:13:42.960 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70529: Thu Dec 5 09:45:29 2024 00:13:42.960 read: IOPS=36.9k, BW=144MiB/s (151MB/s)(722MiB/5002msec) 00:13:42.960 slat (nsec): min=2890, max=67734, avg=3289.50, stdev=1310.05 00:13:42.960 clat (usec): min=861, max=5534, avg=1604.01, stdev=300.36 00:13:42.960 lat (usec): min=864, max=5540, avg=1607.29, stdev=300.54 00:13:42.960 clat percentiles (usec): 00:13:42.960 | 1.00th=[ 1074], 5.00th=[ 1172], 10.00th=[ 1254], 20.00th=[ 1352], 00:13:42.960 | 30.00th=[ 1434], 40.00th=[ 1500], 50.00th=[ 1582], 60.00th=[ 1663], 00:13:42.960 | 70.00th=[ 1745], 80.00th=[ 1827], 90.00th=[ 1975], 95.00th=[ 2114], 00:13:42.960 | 99.00th=[ 2442], 99.50th=[ 2573], 99.90th=[ 3032], 99.95th=[ 3359], 00:13:42.960 | 99.99th=[ 5473] 00:13:42.960 bw ( 
KiB/s): min=141824, max=150016, per=99.76%, avg=147427.56, stdev=2516.25, samples=9 00:13:42.960 iops : min=35456, max=37504, avg=36856.89, stdev=629.06, samples=9 00:13:42.960 lat (usec) : 1000=0.21% 00:13:42.960 lat (msec) : 2=91.15%, 4=8.60%, 10=0.03% 00:13:42.960 cpu : usr=70.87%, sys=26.65%, ctx=13, majf=0, minf=762 00:13:42.960 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:42.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.960 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:42.960 issued rwts: total=184800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.960 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:42.960 00:13:42.960 Run status group 0 (all jobs): 00:13:42.960 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=722MiB (757MB), run=5002-5002msec 00:13:42.960 ----------------------------------------------------- 00:13:42.960 Suppressions used: 00:13:42.960 count bytes template 00:13:42.960 1 11 /usr/src/fio/parse.c 00:13:42.960 1 8 libtcmalloc_minimal.so 00:13:42.960 1 904 libcrypto.so 00:13:42.960 ----------------------------------------------------- 00:13:42.960 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:42.960 09:45:30 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:42.960 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.960 { 00:13:42.960 "subsystems": [ 00:13:42.960 { 00:13:42.960 "subsystem": "bdev", 00:13:42.960 "config": [ 00:13:42.960 { 00:13:42.960 "params": { 00:13:42.960 "io_mechanism": "io_uring", 00:13:42.960 "conserve_cpu": true, 00:13:42.960 "filename": "/dev/nvme0n1", 00:13:42.960 "name": "xnvme_bdev" 00:13:42.960 }, 00:13:42.960 "method": "bdev_xnvme_create" 00:13:42.960 }, 00:13:42.960 { 00:13:42.960 "method": "bdev_wait_for_examine" 00:13:42.960 } 00:13:42.960 ] 00:13:42.960 } 00:13:42.960 ] 00:13:42.960 } 00:13:43.220 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:43.220 fio-3.35 00:13:43.220 Starting 1 thread 00:13:49.801 00:13:49.801 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70616: Thu Dec 5 09:45:36 2024 00:13:49.801 write: IOPS=36.8k, BW=144MiB/s (151MB/s)(720MiB/5005msec); 0 zone resets 00:13:49.801 slat (usec): min=2, max=553, avg= 3.30, stdev= 2.08 00:13:49.801 clat (usec): min=807, max=6710, avg=1605.60, stdev=310.47 00:13:49.801 lat (usec): min=810, max=6713, avg=1608.90, stdev=310.67 00:13:49.801 clat percentiles (usec): 00:13:49.801 | 1.00th=[ 1057], 5.00th=[ 1156], 10.00th=[ 1221], 20.00th=[ 1319], 00:13:49.801 | 30.00th=[ 1418], 40.00th=[ 1500], 50.00th=[ 1582], 60.00th=[ 1663], 00:13:49.801 | 70.00th=[ 1762], 80.00th=[ 1860], 90.00th=[ 2008], 95.00th=[ 2147], 00:13:49.801 | 99.00th=[ 2409], 99.50th=[ 2540], 99.90th=[ 2933], 99.95th=[ 3195], 00:13:49.801 | 99.99th=[ 3752] 00:13:49.801 bw ( KiB/s): min=131192, max=159104, per=100.00%, avg=148904.00, stdev=9479.24, samples=9 00:13:49.801 iops : min=32798, max=39776, avg=37226.22, stdev=2369.82, samples=9 00:13:49.801 lat (usec) : 1000=0.29% 00:13:49.801 lat (msec) : 2=89.12%, 4=10.59%, 10=0.01% 00:13:49.801 cpu : usr=75.02%, sys=21.52%, ctx=35, majf=0, minf=763 00:13:49.801 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:49.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.801 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:49.801 issued rwts: total=0,184406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.801 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:49.801 00:13:49.801 Run status group 0 (all jobs): 00:13:49.801 WRITE: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=720MiB (755MB), run=5005-5005msec 00:13:49.801 ----------------------------------------------------- 00:13:49.801 Suppressions used: 00:13:49.801 count bytes template 00:13:49.801 1 11 /usr/src/fio/parse.c 00:13:49.801 1 8 libtcmalloc_minimal.so 00:13:49.801 1 904 libcrypto.so 00:13:49.801 ----------------------------------------------------- 00:13:49.801 00:13:49.801 00:13:49.801 real 0m13.667s 00:13:49.801 user 0m10.060s 00:13:49.801 sys 0m2.979s 00:13:49.801 09:45:37 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.801 ************************************ 00:13:49.801 END TEST xnvme_fio_plugin 00:13:49.801 ************************************ 00:13:49.801 09:45:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:49.801 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:49.801 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:49.801 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:49.801 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:49.801 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:49.801 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:49.801 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:49.801 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:49.801 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:49.801 09:45:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:49.801 09:45:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.801 09:45:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:49.801 ************************************ 00:13:49.801 START TEST xnvme_rpc 00:13:49.801 ************************************ 00:13:49.801 09:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:49.801 09:45:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:49.801 09:45:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:49.801 09:45:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:49.801 09:45:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:49.802 09:45:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70697 00:13:49.802 09:45:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70697 00:13:49.802 09:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70697 ']' 00:13:49.802 09:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.802 09:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.802 09:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.802 09:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.802 09:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.802 09:45:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:50.061 [2024-12-05 09:45:37.501640] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
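# ----------------------------------------------------------------------
# A minimal sketch, not captured output: for the io_uring_cmd pass-through
# pass below, the suite targets the NVMe generic character device instead
# of the block device, with conserve_cpu left at its default (false):
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
# ----------------------------------------------------------------------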
00:13:50.061 [2024-12-05 09:45:37.501782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70697 ] 00:13:50.061 [2024-12-05 09:45:37.662195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.322 [2024-12-05 09:45:37.761128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.893 xnvme_bdev 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:50.893 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.894 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:51.154 
09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70697 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70697 ']' 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70697 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70697 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.154 killing process with pid 70697 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70697' 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70697 00:13:51.154 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70697 00:13:53.055 00:13:53.055 real 0m2.745s 00:13:53.055 user 0m2.768s 00:13:53.055 sys 0m0.448s 00:13:53.055 09:45:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.055 ************************************ 00:13:53.055 END TEST xnvme_rpc 00:13:53.055 ************************************ 00:13:53.055 09:45:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.055 09:45:40 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:53.055 09:45:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:53.055 09:45:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.055 09:45:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.055 ************************************ 00:13:53.055 START TEST xnvme_bdevperf 00:13:53.055 ************************************ 00:13:53.055 09:45:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:53.055 09:45:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:53.055 09:45:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:13:53.055 09:45:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:53.055 09:45:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:53.055 09:45:40 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:53.055 09:45:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:53.055 09:45:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:53.055 { 00:13:53.055 "subsystems": [ 00:13:53.055 { 00:13:53.055 "subsystem": "bdev", 00:13:53.055 "config": [ 00:13:53.055 { 00:13:53.055 "params": { 00:13:53.055 "io_mechanism": "io_uring_cmd", 00:13:53.055 "conserve_cpu": false, 00:13:53.055 "filename": "/dev/ng0n1", 00:13:53.055 "name": "xnvme_bdev" 00:13:53.055 }, 00:13:53.055 "method": "bdev_xnvme_create" 00:13:53.055 }, 00:13:53.055 { 00:13:53.055 "method": "bdev_wait_for_examine" 00:13:53.055 } 00:13:53.055 ] 00:13:53.055 } 00:13:53.055 ] 00:13:53.055 } 00:13:53.055 [2024-12-05 09:45:40.284800] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:13:53.055 [2024-12-05 09:45:40.284915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70771 ] 00:13:53.055 [2024-12-05 09:45:40.445557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.055 [2024-12-05 09:45:40.546724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.316 Running I/O for 5 seconds... 00:13:55.202 36613.00 IOPS, 143.02 MiB/s [2024-12-05T09:45:44.214Z] 34380.00 IOPS, 134.30 MiB/s [2024-12-05T09:45:45.158Z] 33686.00 IOPS, 131.59 MiB/s [2024-12-05T09:45:46.099Z] 33309.00 IOPS, 130.11 MiB/s 00:13:58.470 Latency(us) 00:13:58.470 [2024-12-05T09:45:46.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.470 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:58.470 xnvme_bdev : 5.00 33056.54 129.13 0.00 0.00 1932.27 667.96 10637.00 00:13:58.470 [2024-12-05T09:45:46.099Z] =================================================================================================================== 00:13:58.470 [2024-12-05T09:45:46.099Z] Total : 33056.54 129.13 0.00 0.00 1932.27 667.96 10637.00 00:13:59.040 09:45:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:59.040 09:45:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:59.040 09:45:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:59.040 09:45:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:59.040 09:45:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:59.040 { 00:13:59.040 "subsystems": [ 00:13:59.040 { 00:13:59.040 "subsystem": "bdev", 00:13:59.040 "config": [ 00:13:59.040 { 00:13:59.040 "params": { 00:13:59.040 "io_mechanism": "io_uring_cmd", 00:13:59.040 "conserve_cpu": false, 00:13:59.040 "filename": "/dev/ng0n1", 00:13:59.040 "name": "xnvme_bdev" 00:13:59.040 }, 00:13:59.040 "method": "bdev_xnvme_create" 00:13:59.040 }, 00:13:59.040 { 00:13:59.040 "method": "bdev_wait_for_examine" 00:13:59.040 } 00:13:59.040 ] 00:13:59.040 } 00:13:59.040 ] 00:13:59.040 } 00:13:59.301 [2024-12-05 09:45:46.667522] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
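# ----------------------------------------------------------------------
# A minimal sketch, not captured output: besides randread/randwrite, the
# io_uring_cmd section below also exercises unmap and write_zeroes; the
# sweep is the same bdevperf call with -w varied. $conf here stands for a
# file (or /dev/fd/N) holding the io_uring_cmd JSON shown above.
for w in randread randwrite unmap write_zeroes; do
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json "$conf" -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
done
# ----------------------------------------------------------------------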
00:13:59.301 [2024-12-05 09:45:46.667658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70845 ] 00:13:59.301 [2024-12-05 09:45:46.833470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.562 [2024-12-05 09:45:46.953113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.824 Running I/O for 5 seconds... 00:14:01.712 34235.00 IOPS, 133.73 MiB/s [2024-12-05T09:45:50.283Z] 34057.50 IOPS, 133.04 MiB/s [2024-12-05T09:45:51.671Z] 33951.33 IOPS, 132.62 MiB/s [2024-12-05T09:45:52.612Z] 33844.75 IOPS, 132.21 MiB/s [2024-12-05T09:45:52.612Z] 33829.00 IOPS, 132.14 MiB/s 00:14:04.983 Latency(us) 00:14:04.983 [2024-12-05T09:45:52.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.983 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:04.983 xnvme_bdev : 5.00 33808.55 132.06 0.00 0.00 1889.15 381.24 5167.26 00:14:04.983 [2024-12-05T09:45:52.612Z] =================================================================================================================== 00:14:04.983 [2024-12-05T09:45:52.612Z] Total : 33808.55 132.06 0.00 0.00 1889.15 381.24 5167.26 00:14:05.555 09:45:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:05.555 09:45:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:05.555 09:45:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:05.555 09:45:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:05.555 09:45:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:05.555 { 00:14:05.555 "subsystems": [ 00:14:05.555 { 00:14:05.555 "subsystem": "bdev", 00:14:05.555 "config": [ 00:14:05.555 { 00:14:05.555 "params": { 00:14:05.555 "io_mechanism": "io_uring_cmd", 00:14:05.555 "conserve_cpu": false, 00:14:05.555 "filename": "/dev/ng0n1", 00:14:05.555 "name": "xnvme_bdev" 00:14:05.555 }, 00:14:05.555 "method": "bdev_xnvme_create" 00:14:05.555 }, 00:14:05.555 { 00:14:05.555 "method": "bdev_wait_for_examine" 00:14:05.555 } 00:14:05.555 ] 00:14:05.555 } 00:14:05.555 ] 00:14:05.555 } 00:14:05.555 [2024-12-05 09:45:53.109767] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:14:05.555 [2024-12-05 09:45:53.109911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70922 ] 00:14:05.816 [2024-12-05 09:45:53.274696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.816 [2024-12-05 09:45:53.394694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.077 Running I/O for 5 seconds... 
00:14:08.093 78464.00 IOPS, 306.50 MiB/s [2024-12-05T09:45:57.109Z] 78400.00 IOPS, 306.25 MiB/s [2024-12-05T09:45:58.054Z] 78485.33 IOPS, 306.58 MiB/s [2024-12-05T09:45:58.997Z] 81968.00 IOPS, 320.19 MiB/s 00:14:11.368 Latency(us) 00:14:11.368 [2024-12-05T09:45:58.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.368 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:11.368 xnvme_bdev : 5.00 81502.87 318.37 0.00 0.00 781.87 523.03 4612.73 00:14:11.368 [2024-12-05T09:45:58.997Z] =================================================================================================================== 00:14:11.368 [2024-12-05T09:45:58.997Z] Total : 81502.87 318.37 0.00 0.00 781.87 523.03 4612.73 00:14:11.941 09:45:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:11.941 09:45:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:11.941 09:45:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:11.941 09:45:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:11.941 09:45:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:11.941 { 00:14:11.941 "subsystems": [ 00:14:11.941 { 00:14:11.941 "subsystem": "bdev", 00:14:11.941 "config": [ 00:14:11.941 { 00:14:11.941 "params": { 00:14:11.941 "io_mechanism": "io_uring_cmd", 00:14:11.941 "conserve_cpu": false, 00:14:11.941 "filename": "/dev/ng0n1", 00:14:11.941 "name": "xnvme_bdev" 00:14:11.941 }, 00:14:11.941 "method": "bdev_xnvme_create" 00:14:11.941 }, 00:14:11.941 { 00:14:11.941 "method": "bdev_wait_for_examine" 00:14:11.941 } 00:14:11.941 ] 00:14:11.941 } 00:14:11.941 ] 00:14:11.941 } 00:14:11.941 [2024-12-05 09:45:59.363205] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:14:11.941 [2024-12-05 09:45:59.363319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70995 ] 00:14:11.941 [2024-12-05 09:45:59.521185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.202 [2024-12-05 09:45:59.604479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.203 Running I/O for 5 seconds... 
00:14:14.533 56746.00 IOPS, 221.66 MiB/s [2024-12-05T09:46:03.105Z] 45304.00 IOPS, 176.97 MiB/s [2024-12-05T09:46:04.048Z] 38931.33 IOPS, 152.08 MiB/s [2024-12-05T09:46:04.989Z] 35481.00 IOPS, 138.60 MiB/s [2024-12-05T09:46:04.989Z] 33446.80 IOPS, 130.65 MiB/s 00:14:17.360 Latency(us) 00:14:17.360 [2024-12-05T09:46:04.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.360 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:17.360 xnvme_bdev : 5.01 33424.22 130.56 0.00 0.00 1910.01 60.65 24500.38 00:14:17.360 [2024-12-05T09:46:04.989Z] =================================================================================================================== 00:14:17.360 [2024-12-05T09:46:04.989Z] Total : 33424.22 130.56 0.00 0.00 1910.01 60.65 24500.38 00:14:18.304 00:14:18.304 real 0m25.375s 00:14:18.304 user 0m13.992s 00:14:18.304 sys 0m10.902s 00:14:18.304 09:46:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.304 09:46:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:18.304 ************************************ 00:14:18.304 END TEST xnvme_bdevperf 00:14:18.304 ************************************ 00:14:18.304 09:46:05 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:18.304 09:46:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:18.304 09:46:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.304 09:46:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:18.304 ************************************ 00:14:18.304 START TEST xnvme_fio_plugin 00:14:18.304 ************************************ 00:14:18.304 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:18.304 09:46:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:18.304 09:46:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:18.304 09:46:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:18.304 09:46:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:18.304 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:18.305 09:46:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:18.305 { 00:14:18.305 "subsystems": [ 00:14:18.305 { 00:14:18.305 "subsystem": "bdev", 00:14:18.305 "config": [ 00:14:18.305 { 00:14:18.305 "params": { 00:14:18.305 "io_mechanism": "io_uring_cmd", 00:14:18.305 "conserve_cpu": false, 00:14:18.305 "filename": "/dev/ng0n1", 00:14:18.305 "name": "xnvme_bdev" 00:14:18.305 }, 00:14:18.305 "method": "bdev_xnvme_create" 00:14:18.305 }, 00:14:18.305 { 00:14:18.305 "method": "bdev_wait_for_examine" 00:14:18.305 } 00:14:18.305 ] 00:14:18.305 } 00:14:18.305 ] 00:14:18.305 } 00:14:18.305 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:18.305 fio-3.35 00:14:18.305 Starting 1 thread 00:14:24.893 00:14:24.893 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71113: Thu Dec 5 09:46:11 2024 00:14:24.893 read: IOPS=41.8k, BW=163MiB/s (171MB/s)(817MiB/5002msec) 00:14:24.893 slat (nsec): min=2884, max=90348, avg=3376.39, stdev=1510.51 00:14:24.893 clat (usec): min=874, max=3652, avg=1396.91, stdev=275.40 00:14:24.893 lat (usec): min=877, max=3688, avg=1400.29, stdev=275.83 00:14:24.893 clat percentiles (usec): 00:14:24.893 | 1.00th=[ 996], 5.00th=[ 1074], 10.00th=[ 1106], 20.00th=[ 1172], 00:14:24.893 | 30.00th=[ 1221], 40.00th=[ 1270], 50.00th=[ 1336], 60.00th=[ 1401], 00:14:24.893 | 70.00th=[ 1500], 80.00th=[ 1614], 90.00th=[ 1762], 95.00th=[ 1909], 00:14:24.893 | 99.00th=[ 2212], 99.50th=[ 2376], 99.90th=[ 2868], 99.95th=[ 3064], 00:14:24.893 | 99.99th=[ 3425] 00:14:24.893 bw ( KiB/s): min=150528, max=188928, per=100.00%, avg=170211.56, stdev=12643.16, samples=9 00:14:24.893 iops : min=37632, max=47232, avg=42552.89, stdev=3160.79, samples=9 00:14:24.893 lat (usec) : 1000=1.11% 00:14:24.893 lat (msec) : 2=95.57%, 4=3.32% 00:14:24.893 cpu : usr=38.95%, sys=60.01%, ctx=12, majf=0, minf=762 00:14:24.893 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:24.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.893 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:14:24.893 issued rwts: total=209216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:24.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:24.893 00:14:24.893 Run status group 0 (all jobs): 00:14:24.893 READ: bw=163MiB/s (171MB/s), 163MiB/s-163MiB/s (171MB/s-171MB/s), io=817MiB (857MB), run=5002-5002msec 00:14:25.155 ----------------------------------------------------- 00:14:25.155 Suppressions used: 00:14:25.155 count bytes template 00:14:25.155 1 11 /usr/src/fio/parse.c 00:14:25.155 1 8 libtcmalloc_minimal.so 00:14:25.155 1 904 libcrypto.so 00:14:25.155 ----------------------------------------------------- 00:14:25.155 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:25.155 09:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:25.155 { 00:14:25.155 "subsystems": [ 00:14:25.155 { 00:14:25.155 "subsystem": "bdev", 00:14:25.155 "config": [ 00:14:25.155 { 00:14:25.155 "params": { 00:14:25.155 "io_mechanism": "io_uring_cmd", 00:14:25.155 "conserve_cpu": false, 00:14:25.155 "filename": "/dev/ng0n1", 00:14:25.155 "name": "xnvme_bdev" 00:14:25.155 }, 00:14:25.155 "method": "bdev_xnvme_create" 00:14:25.155 }, 00:14:25.155 { 00:14:25.155 "method": "bdev_wait_for_examine" 00:14:25.155 } 00:14:25.155 ] 00:14:25.155 } 00:14:25.155 ] 00:14:25.155 } 00:14:25.155 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:25.155 fio-3.35 00:14:25.155 Starting 1 thread 00:14:31.744 00:14:31.744 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71203: Thu Dec 5 09:46:18 2024 00:14:31.744 write: IOPS=39.0k, BW=152MiB/s (160MB/s)(762MiB/5001msec); 0 zone resets 00:14:31.744 slat (nsec): min=2910, max=78350, avg=3944.50, stdev=2027.74 00:14:31.744 clat (usec): min=145, max=7712, avg=1487.67, stdev=311.64 00:14:31.744 lat (usec): min=149, max=7716, avg=1491.61, stdev=312.09 00:14:31.744 clat percentiles (usec): 00:14:31.744 | 1.00th=[ 889], 5.00th=[ 1057], 10.00th=[ 1139], 20.00th=[ 1221], 00:14:31.744 | 30.00th=[ 1303], 40.00th=[ 1385], 50.00th=[ 1467], 60.00th=[ 1549], 00:14:31.744 | 70.00th=[ 1631], 80.00th=[ 1729], 90.00th=[ 1860], 95.00th=[ 2008], 00:14:31.744 | 99.00th=[ 2343], 99.50th=[ 2540], 99.90th=[ 3326], 99.95th=[ 3589], 00:14:31.744 | 99.99th=[ 4686] 00:14:31.745 bw ( KiB/s): min=141480, max=178896, per=100.00%, avg=157148.44, stdev=15433.10, samples=9 00:14:31.745 iops : min=35370, max=44724, avg=39287.11, stdev=3858.27, samples=9 00:14:31.745 lat (usec) : 250=0.01%, 500=0.17%, 750=0.25%, 1000=2.16% 00:14:31.745 lat (msec) : 2=92.29%, 4=5.10%, 10=0.03% 00:14:31.745 cpu : usr=37.62%, sys=61.08%, ctx=15, majf=0, minf=763 00:14:31.745 IO depths : 1=1.4%, 2=2.9%, 4=5.8%, 8=11.8%, 16=23.9%, 32=52.5%, >=64=1.7% 00:14:31.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.745 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:31.745 issued rwts: total=0,194974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.745 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:31.745 00:14:31.745 Run status group 0 (all jobs): 00:14:31.745 WRITE: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), io=762MiB (799MB), run=5001-5001msec 00:14:32.007 ----------------------------------------------------- 00:14:32.007 Suppressions used: 00:14:32.007 count bytes template 00:14:32.007 1 11 /usr/src/fio/parse.c 00:14:32.007 1 8 libtcmalloc_minimal.so 00:14:32.007 1 904 libcrypto.so 00:14:32.007 ----------------------------------------------------- 00:14:32.007 00:14:32.007 ************************************ 00:14:32.007 END TEST xnvme_fio_plugin 00:14:32.007 ************************************ 00:14:32.007 00:14:32.007 real 0m13.773s 00:14:32.007 user 0m6.682s 00:14:32.007 sys 0m6.653s 00:14:32.007 09:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.007 09:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:32.007 09:46:19 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:32.007 09:46:19 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:32.007 09:46:19 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:14:32.007 09:46:19 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:32.007 09:46:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:32.007 09:46:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.007 09:46:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.007 ************************************ 00:14:32.007 START TEST xnvme_rpc 00:14:32.007 ************************************ 00:14:32.007 09:46:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:32.007 09:46:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:32.007 09:46:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:32.007 09:46:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:32.007 09:46:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:32.007 09:46:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71284 00:14:32.007 09:46:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71284 00:14:32.007 09:46:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71284 ']' 00:14:32.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.007 09:46:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.007 09:46:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.007 09:46:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.007 09:46:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.007 09:46:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.007 09:46:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:32.007 [2024-12-05 09:46:19.602366] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
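The xnvme_rpc test beginning here exercises a bare spdk_tgt over its RPC socket: create the xnvme bdev with conserve_cpu enabled (-c), read the attributes back, then delete it. A condensed sketch of the same sequence, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock — arguments and jq filters are copied from the trace below:

    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expected: true
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev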
00:14:32.007 [2024-12-05 09:46:19.602531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71284 ] 00:14:32.269 [2024-12-05 09:46:19.767274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.269 [2024-12-05 09:46:19.885404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.213 xnvme_bdev 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71284 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71284 ']' 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71284 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71284 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.213 killing process with pid 71284 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71284' 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71284 00:14:33.213 09:46:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71284 00:14:35.130 ************************************ 00:14:35.130 END TEST xnvme_rpc 00:14:35.130 ************************************ 00:14:35.130 00:14:35.130 real 0m2.904s 00:14:35.130 user 0m2.879s 00:14:35.130 sys 0m0.507s 00:14:35.130 09:46:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.130 09:46:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.130 09:46:22 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:35.130 09:46:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:35.130 09:46:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.130 09:46:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:35.130 ************************************ 00:14:35.130 START TEST xnvme_bdevperf 00:14:35.130 ************************************ 00:14:35.130 09:46:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:35.130 09:46:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:35.130 09:46:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:35.130 09:46:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:35.130 09:46:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:35.130 09:46:22 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:35.130 09:46:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:35.130 09:46:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:35.130 { 00:14:35.130 "subsystems": [ 00:14:35.130 { 00:14:35.130 "subsystem": "bdev", 00:14:35.130 "config": [ 00:14:35.130 { 00:14:35.130 "params": { 00:14:35.130 "io_mechanism": "io_uring_cmd", 00:14:35.130 "conserve_cpu": true, 00:14:35.130 "filename": "/dev/ng0n1", 00:14:35.130 "name": "xnvme_bdev" 00:14:35.130 }, 00:14:35.130 "method": "bdev_xnvme_create" 00:14:35.130 }, 00:14:35.130 { 00:14:35.130 "method": "bdev_wait_for_examine" 00:14:35.130 } 00:14:35.130 ] 00:14:35.130 } 00:14:35.130 ] 00:14:35.130 } 00:14:35.130 [2024-12-05 09:46:22.565168] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:14:35.130 [2024-12-05 09:46:22.565503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71352 ] 00:14:35.130 [2024-12-05 09:46:22.730115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.392 [2024-12-05 09:46:22.850120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.653 Running I/O for 5 seconds... 00:14:37.541 34679.00 IOPS, 135.46 MiB/s [2024-12-05T09:46:26.550Z] 35613.50 IOPS, 139.12 MiB/s [2024-12-05T09:46:27.490Z] 36809.33 IOPS, 143.79 MiB/s [2024-12-05T09:46:28.433Z] 36645.00 IOPS, 143.14 MiB/s 00:14:40.804 Latency(us) 00:14:40.804 [2024-12-05T09:46:28.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.804 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:40.804 xnvme_bdev : 5.00 36505.89 142.60 0.00 0.00 1748.97 793.99 13611.32 00:14:40.804 [2024-12-05T09:46:28.433Z] =================================================================================================================== 00:14:40.804 [2024-12-05T09:46:28.433Z] Total : 36505.89 142.60 0.00 0.00 1748.97 793.99 13611.32 00:14:41.378 09:46:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:41.378 09:46:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:41.378 09:46:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:41.378 09:46:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:41.378 09:46:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:41.378 { 00:14:41.378 "subsystems": [ 00:14:41.378 { 00:14:41.378 "subsystem": "bdev", 00:14:41.378 "config": [ 00:14:41.378 { 00:14:41.378 "params": { 00:14:41.378 "io_mechanism": "io_uring_cmd", 00:14:41.378 "conserve_cpu": true, 00:14:41.378 "filename": "/dev/ng0n1", 00:14:41.378 "name": "xnvme_bdev" 00:14:41.378 }, 00:14:41.378 "method": "bdev_xnvme_create" 00:14:41.378 }, 00:14:41.378 { 00:14:41.378 "method": "bdev_wait_for_examine" 00:14:41.378 } 00:14:41.378 ] 00:14:41.378 } 00:14:41.378 ] 00:14:41.378 } 00:14:41.378 [2024-12-05 09:46:29.001574] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:14:41.378 [2024-12-05 09:46:29.001729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71432 ] 00:14:41.638 [2024-12-05 09:46:29.166675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.900 [2024-12-05 09:46:29.284193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.161 Running I/O for 5 seconds... 00:14:44.136 36159.00 IOPS, 141.25 MiB/s [2024-12-05T09:46:32.710Z] 35806.50 IOPS, 139.87 MiB/s [2024-12-05T09:46:33.653Z] 35755.00 IOPS, 139.67 MiB/s [2024-12-05T09:46:34.597Z] 35760.50 IOPS, 139.69 MiB/s 00:14:46.968 Latency(us) 00:14:46.968 [2024-12-05T09:46:34.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.968 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:46.968 xnvme_bdev : 5.00 35587.30 139.01 0.00 0.00 1793.70 674.26 8116.38 00:14:46.968 [2024-12-05T09:46:34.597Z] =================================================================================================================== 00:14:46.968 [2024-12-05T09:46:34.597Z] Total : 35587.30 139.01 0.00 0.00 1793.70 674.26 8116.38 00:14:47.912 09:46:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:47.912 09:46:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:47.912 09:46:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:47.912 09:46:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:47.912 09:46:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:47.912 { 00:14:47.912 "subsystems": [ 00:14:47.912 { 00:14:47.912 "subsystem": "bdev", 00:14:47.912 "config": [ 00:14:47.912 { 00:14:47.912 "params": { 00:14:47.912 "io_mechanism": "io_uring_cmd", 00:14:47.912 "conserve_cpu": true, 00:14:47.912 "filename": "/dev/ng0n1", 00:14:47.912 "name": "xnvme_bdev" 00:14:47.912 }, 00:14:47.912 "method": "bdev_xnvme_create" 00:14:47.912 }, 00:14:47.912 { 00:14:47.912 "method": "bdev_wait_for_examine" 00:14:47.912 } 00:14:47.912 ] 00:14:47.912 } 00:14:47.912 ] 00:14:47.912 } 00:14:47.912 [2024-12-05 09:46:35.452440] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:14:47.912 [2024-12-05 09:46:35.452818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71506 ] 00:14:48.173 [2024-12-05 09:46:35.615422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.173 [2024-12-05 09:46:35.736730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.434 Running I/O for 5 seconds... 
00:14:50.767 79104.00 IOPS, 309.00 MiB/s [2024-12-05T09:46:39.340Z] 79744.00 IOPS, 311.50 MiB/s [2024-12-05T09:46:40.284Z] 79936.00 IOPS, 312.25 MiB/s [2024-12-05T09:46:41.226Z] 82448.00 IOPS, 322.06 MiB/s [2024-12-05T09:46:41.226Z] 84313.60 IOPS, 329.35 MiB/s 00:14:53.597 Latency(us) 00:14:53.597 [2024-12-05T09:46:41.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.597 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:53.597 xnvme_bdev : 5.00 84288.89 329.25 0.00 0.00 755.90 393.85 5343.70 00:14:53.597 [2024-12-05T09:46:41.226Z] =================================================================================================================== 00:14:53.597 [2024-12-05T09:46:41.226Z] Total : 84288.89 329.25 0.00 0.00 755.90 393.85 5343.70 00:14:54.542 09:46:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:54.542 09:46:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:54.542 09:46:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:54.542 09:46:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:54.542 09:46:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:54.542 { 00:14:54.542 "subsystems": [ 00:14:54.542 { 00:14:54.542 "subsystem": "bdev", 00:14:54.542 "config": [ 00:14:54.542 { 00:14:54.542 "params": { 00:14:54.542 "io_mechanism": "io_uring_cmd", 00:14:54.542 "conserve_cpu": true, 00:14:54.542 "filename": "/dev/ng0n1", 00:14:54.542 "name": "xnvme_bdev" 00:14:54.542 }, 00:14:54.542 "method": "bdev_xnvme_create" 00:14:54.542 }, 00:14:54.542 { 00:14:54.542 "method": "bdev_wait_for_examine" 00:14:54.542 } 00:14:54.542 ] 00:14:54.542 } 00:14:54.542 ] 00:14:54.542 } 00:14:54.542 [2024-12-05 09:46:41.905362] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:14:54.542 [2024-12-05 09:46:41.905556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71580 ] 00:14:54.542 [2024-12-05 09:46:42.072000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.803 [2024-12-05 09:46:42.194529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.085 Running I/O for 5 seconds... 
00:14:57.008 45753.00 IOPS, 178.72 MiB/s [2024-12-05T09:46:45.575Z] 50316.50 IOPS, 196.55 MiB/s [2024-12-05T09:46:46.517Z] 47769.33 IOPS, 186.60 MiB/s [2024-12-05T09:46:47.906Z] 46328.75 IOPS, 180.97 MiB/s [2024-12-05T09:46:47.906Z] 44983.80 IOPS, 175.72 MiB/s 00:15:00.277 Latency(us) 00:15:00.277 [2024-12-05T09:46:47.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.277 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:00.277 xnvme_bdev : 5.00 44965.92 175.65 0.00 0.00 1418.11 62.62 55251.89 00:15:00.277 [2024-12-05T09:46:47.906Z] =================================================================================================================== 00:15:00.277 [2024-12-05T09:46:47.906Z] Total : 44965.92 175.65 0.00 0.00 1418.11 62.62 55251.89 00:15:00.851 00:15:00.851 real 0m25.857s 00:15:00.851 user 0m16.498s 00:15:00.851 sys 0m7.109s 00:15:00.851 09:46:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.851 09:46:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:00.851 ************************************ 00:15:00.851 END TEST xnvme_bdevperf 00:15:00.851 ************************************ 00:15:00.851 09:46:48 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:00.851 09:46:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:00.851 09:46:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.851 09:46:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:00.851 ************************************ 00:15:00.851 START TEST xnvme_fio_plugin 00:15:00.851 ************************************ 00:15:00.851 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:00.851 09:46:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:00.851 09:46:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:00.851 09:46:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:00.851 09:46:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:00.851 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:00.851 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:00.851 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:00.851 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:00.851 09:46:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:00.852 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:00.852 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:00.852 09:46:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 
00:15:00.852 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:00.852 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:00.852 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:00.852 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:00.852 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:00.852 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:00.852 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:00.852 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:00.852 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:00.852 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:00.852 09:46:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:00.852 { 00:15:00.852 "subsystems": [ 00:15:00.852 { 00:15:00.852 "subsystem": "bdev", 00:15:00.852 "config": [ 00:15:00.852 { 00:15:00.852 "params": { 00:15:00.852 "io_mechanism": "io_uring_cmd", 00:15:00.852 "conserve_cpu": true, 00:15:00.852 "filename": "/dev/ng0n1", 00:15:00.852 "name": "xnvme_bdev" 00:15:00.852 }, 00:15:00.852 "method": "bdev_xnvme_create" 00:15:00.852 }, 00:15:00.852 { 00:15:00.852 "method": "bdev_wait_for_examine" 00:15:00.852 } 00:15:00.852 ] 00:15:00.852 } 00:15:00.852 ] 00:15:00.852 } 00:15:01.114 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:01.114 fio-3.35 00:15:01.114 Starting 1 thread 00:15:07.709 00:15:07.709 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71699: Thu Dec 5 09:46:54 2024 00:15:07.709 read: IOPS=36.2k, BW=141MiB/s (148MB/s)(707MiB/5001msec) 00:15:07.709 slat (nsec): min=2880, max=95887, avg=3649.68, stdev=1989.75 00:15:07.709 clat (usec): min=901, max=7099, avg=1621.45, stdev=351.89 00:15:07.709 lat (usec): min=904, max=7104, avg=1625.10, stdev=352.31 00:15:07.709 clat percentiles (usec): 00:15:07.709 | 1.00th=[ 1045], 5.00th=[ 1156], 10.00th=[ 1221], 20.00th=[ 1336], 00:15:07.709 | 30.00th=[ 1418], 40.00th=[ 1500], 50.00th=[ 1582], 60.00th=[ 1663], 00:15:07.709 | 70.00th=[ 1762], 80.00th=[ 1876], 90.00th=[ 2073], 95.00th=[ 2212], 00:15:07.709 | 99.00th=[ 2540], 99.50th=[ 2737], 99.90th=[ 3228], 99.95th=[ 5211], 00:15:07.709 | 99.99th=[ 7046] 00:15:07.709 bw ( KiB/s): min=130560, max=166912, per=100.00%, avg=145009.78, stdev=13448.60, samples=9 00:15:07.709 iops : min=32640, max=41728, avg=36252.44, stdev=3362.15, samples=9 00:15:07.709 lat (usec) : 1000=0.31% 00:15:07.709 lat (msec) : 2=86.76%, 4=12.86%, 10=0.07% 00:15:07.709 cpu : usr=61.12%, sys=35.82%, ctx=12, majf=0, minf=762 00:15:07.709 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:07.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.709 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:15:07.709 issued rwts: total=180864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.709 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:07.709 00:15:07.709 Run status group 0 (all jobs): 00:15:07.709 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=707MiB (741MB), run=5001-5001msec 00:15:07.971 ----------------------------------------------------- 00:15:07.971 Suppressions used: 00:15:07.971 count bytes template 00:15:07.971 1 11 /usr/src/fio/parse.c 00:15:07.971 1 8 libtcmalloc_minimal.so 00:15:07.971 1 904 libcrypto.so 00:15:07.971 ----------------------------------------------------- 00:15:07.971 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:07.971 09:46:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:07.971 { 00:15:07.971 "subsystems": [ 00:15:07.971 { 00:15:07.971 "subsystem": "bdev", 00:15:07.971 "config": [ 00:15:07.971 { 00:15:07.971 "params": { 00:15:07.971 "io_mechanism": "io_uring_cmd", 00:15:07.971 "conserve_cpu": true, 00:15:07.971 "filename": "/dev/ng0n1", 00:15:07.971 "name": "xnvme_bdev" 00:15:07.971 }, 00:15:07.971 "method": "bdev_xnvme_create" 00:15:07.971 }, 00:15:07.971 { 00:15:07.971 "method": "bdev_wait_for_examine" 00:15:07.971 } 00:15:07.971 ] 00:15:07.971 } 00:15:07.971 ] 00:15:07.971 } 00:15:08.231 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:08.231 fio-3.35 00:15:08.231 Starting 1 thread 00:15:14.818 00:15:14.818 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71791: Thu Dec 5 09:47:01 2024 00:15:14.818 write: IOPS=36.4k, BW=142MiB/s (149MB/s)(711MiB/5003msec); 0 zone resets 00:15:14.818 slat (usec): min=2, max=209, avg= 4.09, stdev= 2.44 00:15:14.818 clat (usec): min=502, max=5764, avg=1593.10, stdev=310.65 00:15:14.818 lat (usec): min=505, max=5768, avg=1597.19, stdev=311.21 00:15:14.818 clat percentiles (usec): 00:15:14.818 | 1.00th=[ 1037], 5.00th=[ 1172], 10.00th=[ 1254], 20.00th=[ 1352], 00:15:14.818 | 30.00th=[ 1418], 40.00th=[ 1483], 50.00th=[ 1549], 60.00th=[ 1631], 00:15:14.818 | 70.00th=[ 1713], 80.00th=[ 1811], 90.00th=[ 1975], 95.00th=[ 2114], 00:15:14.818 | 99.00th=[ 2507], 99.50th=[ 2704], 99.90th=[ 3359], 99.95th=[ 4424], 00:15:14.818 | 99.99th=[ 5473] 00:15:14.818 bw ( KiB/s): min=139056, max=159616, per=100.00%, avg=145623.11, stdev=5692.75, samples=9 00:15:14.818 iops : min=34764, max=39904, avg=36405.78, stdev=1423.19, samples=9 00:15:14.818 lat (usec) : 750=0.01%, 1000=0.50% 00:15:14.818 lat (msec) : 2=90.61%, 4=8.83%, 10=0.06% 00:15:14.818 cpu : usr=56.42%, sys=39.38%, ctx=13, majf=0, minf=763 00:15:14.818 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.5%, 16=25.0%, 32=50.2%, >=64=1.6% 00:15:14.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.818 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:14.818 issued rwts: total=0,182118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:14.818 00:15:14.818 Run status group 0 (all jobs): 00:15:14.818 WRITE: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=711MiB (746MB), run=5003-5003msec 00:15:14.818 ----------------------------------------------------- 00:15:14.818 Suppressions used: 00:15:14.818 count bytes template 00:15:14.818 1 11 /usr/src/fio/parse.c 00:15:14.818 1 8 libtcmalloc_minimal.so 00:15:14.818 1 904 libcrypto.so 00:15:14.818 ----------------------------------------------------- 00:15:14.818 00:15:14.818 ************************************ 00:15:14.818 END TEST xnvme_fio_plugin 00:15:14.818 ************************************ 00:15:14.818 00:15:14.818 real 0m13.985s 00:15:14.818 user 0m8.822s 00:15:14.818 sys 0m4.466s 00:15:14.818 09:47:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.818 09:47:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:15.080 Process with pid 71284 is not found 00:15:15.080 09:47:02 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71284 00:15:15.080 09:47:02 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71284 ']' 00:15:15.080 09:47:02 nvme_xnvme -- common/autotest_common.sh@958 -- # 
kill -0 71284 00:15:15.080 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71284) - No such process 00:15:15.080 09:47:02 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71284 is not found' 00:15:15.080 ************************************ 00:15:15.080 END TEST nvme_xnvme 00:15:15.080 ************************************ 00:15:15.080 00:15:15.080 real 3m28.950s 00:15:15.080 user 1m59.768s 00:15:15.080 sys 1m14.890s 00:15:15.080 09:47:02 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.080 09:47:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.080 09:47:02 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:15.080 09:47:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:15.080 09:47:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.080 09:47:02 -- common/autotest_common.sh@10 -- # set +x 00:15:15.080 ************************************ 00:15:15.080 START TEST blockdev_xnvme 00:15:15.080 ************************************ 00:15:15.080 09:47:02 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:15.080 * Looking for test storage... 00:15:15.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:15.080 09:47:02 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:15.080 09:47:02 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:15.080 09:47:02 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:15.080 09:47:02 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:15.080 09:47:02 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:15.080 09:47:02 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:15.080 09:47:02 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:15.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.080 --rc genhtml_branch_coverage=1 00:15:15.080 --rc genhtml_function_coverage=1 00:15:15.081 --rc genhtml_legend=1 00:15:15.081 --rc geninfo_all_blocks=1 00:15:15.081 --rc geninfo_unexecuted_blocks=1 00:15:15.081 00:15:15.081 ' 00:15:15.081 09:47:02 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:15.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.081 --rc genhtml_branch_coverage=1 00:15:15.081 --rc genhtml_function_coverage=1 00:15:15.081 --rc genhtml_legend=1 00:15:15.081 --rc geninfo_all_blocks=1 00:15:15.081 --rc geninfo_unexecuted_blocks=1 00:15:15.081 00:15:15.081 ' 00:15:15.081 09:47:02 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:15.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.081 --rc genhtml_branch_coverage=1 00:15:15.081 --rc genhtml_function_coverage=1 00:15:15.081 --rc genhtml_legend=1 00:15:15.081 --rc geninfo_all_blocks=1 00:15:15.081 --rc geninfo_unexecuted_blocks=1 00:15:15.081 00:15:15.081 ' 00:15:15.081 09:47:02 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:15.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.081 --rc genhtml_branch_coverage=1 00:15:15.081 --rc genhtml_function_coverage=1 00:15:15.081 --rc genhtml_legend=1 00:15:15.081 --rc geninfo_all_blocks=1 00:15:15.081 --rc geninfo_unexecuted_blocks=1 00:15:15.081 00:15:15.081 ' 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:15.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71926 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71926 00:15:15.081 09:47:02 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 71926 ']' 00:15:15.081 09:47:02 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.081 09:47:02 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.081 09:47:02 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.081 09:47:02 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.081 09:47:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.081 09:47:02 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:15.342 [2024-12-05 09:47:02.789019] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
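setup_xnvme_conf in the trace that follows scans /dev/nvme*n*, skips zoned namespaces, and queues one bdev_xnvme_create per usable namespace. A simplified sketch of that filter — not the verbatim helper; the sysfs test mirrors the [[ -e /sys/block/.../queue/zoned ]] / [[ none != none ]] checks below, and io_uring plus the -c flag come from blockdev.sh@88/@96:

    io_mechanism=io_uring
    nvmes=()
    for nvme in /dev/nvme*n*; do
        # /sys/block/<ns>/queue/zoned reads 'none' for a conventional namespace
        zoned=/sys/block/${nvme##*/}/queue/zoned
        [[ -e $zoned && $(cat "$zoned") != none ]] && continue   # skip zoned namespaces
        nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
    done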
00:15:15.342 [2024-12-05 09:47:02.789169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71926 ] 00:15:15.342 [2024-12-05 09:47:02.952346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.604 [2024-12-05 09:47:03.073710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.178 09:47:03 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.178 09:47:03 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:16.178 09:47:03 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:16.178 09:47:03 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:16.178 09:47:03 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:16.178 09:47:03 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:16.178 09:47:03 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:16.750 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:17.354 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:17.354 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:17.354 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:17.354 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:17.354 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:17.354 09:47:04 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:17.354 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:17.355 09:47:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:17.355 09:47:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.355 09:47:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:17.355 nvme0n1 00:15:17.355 nvme0n2 00:15:17.355 nvme0n3 00:15:17.355 nvme1n1 00:15:17.355 nvme2n1 00:15:17.355 nvme3n1 00:15:17.355 09:47:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:17.355 09:47:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.355 09:47:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.355 09:47:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:17.355 09:47:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.355 09:47:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.355 09:47:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.355 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:17.355 09:47:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.355 09:47:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.640 09:47:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.640 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:17.640 09:47:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.640 09:47:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.640 
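After the six bdev_xnvme_create calls above are replayed through rpc_cmd, the suite re-reads the bdev table and keeps only unclaimed bdevs; the mapfile/jq pair traced below condenses to roughly this one-liner (rpc_cmd is the framework's wrapper around scripts/rpc.py on the default socket):

    # Collect the names of all bdevs that no module has claimed yet.
    mapfile -t bdevs_name < <(rpc_cmd bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name')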
09:47:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.640 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:17.640 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:17.640 09:47:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.640 09:47:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.640 09:47:04 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:17.640 09:47:05 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.640 09:47:05 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:17.640 09:47:05 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:17.640 09:47:05 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "0b2e1ce4-f3fc-4889-9c9b-47cc172e1fb6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0b2e1ce4-f3fc-4889-9c9b-47cc172e1fb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "7006c8ea-6319-40fc-8746-0bf8218c93b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7006c8ea-6319-40fc-8746-0bf8218c93b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "5b8fbae9-c6e1-4400-bcb0-28fd07586a46"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5b8fbae9-c6e1-4400-bcb0-28fd07586a46",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1ad734fd-d162-47fc-b0d6-6dd0224ecfb6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1ad734fd-d162-47fc-b0d6-6dd0224ecfb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "185ba7b7-3f74-400d-b8a6-06120a3d0f50"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "185ba7b7-3f74-400d-b8a6-06120a3d0f50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6911c971-f63c-409e-a4c8-24872b2dcf8c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6911c971-f63c-409e-a4c8-24872b2dcf8c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:17.640 09:47:05 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:17.640 09:47:05 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:17.640 09:47:05 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:17.640 09:47:05 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 71926 00:15:17.640 09:47:05 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71926 ']' 00:15:17.640 09:47:05 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71926 00:15:17.640 09:47:05 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:17.640 09:47:05 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.640 09:47:05 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 71926 00:15:17.640 killing process with pid 71926 00:15:17.640 09:47:05 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.640 09:47:05 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.640 09:47:05 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71926' 00:15:17.640 09:47:05 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71926 00:15:17.640 09:47:05 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71926 00:15:19.546 09:47:06 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:19.546 09:47:06 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:19.546 09:47:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:19.546 09:47:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:19.547 09:47:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.547 ************************************ 00:15:19.547 START TEST bdev_hello_world 00:15:19.547 ************************************ 00:15:19.547 09:47:06 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:19.547 [2024-12-05 09:47:06.839957] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:15:19.547 [2024-12-05 09:47:06.840110] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72210 ] 00:15:19.547 [2024-12-05 09:47:07.007825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.547 [2024-12-05 09:47:07.126861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.118 [2024-12-05 09:47:07.533605] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:20.118 [2024-12-05 09:47:07.533664] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:20.118 [2024-12-05 09:47:07.533682] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:20.118 [2024-12-05 09:47:07.535911] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:20.118 [2024-12-05 09:47:07.537270] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:20.118 [2024-12-05 09:47:07.537579] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:20.118 [2024-12-05 09:47:07.538094] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
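For context, the six xNVMe bdevs dumped in the JSON above were assembled by setup_xnvme_conf, traced earlier: one bdev_xnvme_create RPC per /dev/nvme*n* node, with zoned namespaces skipped via sysfs. A condensed sketch; io_uring and the -c (conserve-CPU) flag match the trace, and the rest of the framework plumbing is omitted:

    io_mechanism=io_uring
    nvmes=()
    for nvme in /dev/nvme*n*; do
        ns=${nvme##*/}                  # /dev/nvme0n1 -> nvme0n1
        [[ -b $nvme ]] || continue      # block device nodes only
        # get_zoned_devs: a namespace whose queue/zoned attribute is not "none" is skipped
        [[ -e /sys/block/$ns/queue/zoned && $(< "/sys/block/$ns/queue/zoned") != none ]] && continue
        nvmes+=("bdev_xnvme_create $nvme $ns $io_mechanism -c")
    done
    # rpc_cmd replays each stdin line as one RPC, so all six bdevs are created in one session.
    printf '%s\n' "${nvmes[@]}" | rpc_cmd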
00:15:20.118 00:15:20.118 [2024-12-05 09:47:07.538132] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:21.061 00:15:21.061 real 0m1.567s 00:15:21.061 user 0m1.185s 00:15:21.061 sys 0m0.233s 00:15:21.061 09:47:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.061 ************************************ 00:15:21.061 END TEST bdev_hello_world 00:15:21.061 ************************************ 00:15:21.061 09:47:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:21.061 09:47:08 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:21.061 09:47:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:21.061 09:47:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.061 09:47:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.061 ************************************ 00:15:21.061 START TEST bdev_bounds 00:15:21.061 ************************************ 00:15:21.061 09:47:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:21.061 Process bdevio pid: 72245 00:15:21.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.061 09:47:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72245 00:15:21.061 09:47:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:21.061 09:47:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72245' 00:15:21.061 09:47:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72245 00:15:21.061 09:47:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72245 ']' 00:15:21.061 09:47:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.061 09:47:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.061 09:47:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.061 09:47:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.061 09:47:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:21.061 09:47:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:21.061 [2024-12-05 09:47:08.484000] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
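bdev_bounds, now starting, drives the bdevio app against the same JSON config and then kicks the CUnit suites over RPC. Condensed from the blockdev.sh trace above, with flags as shown there; waitforlisten and killprocess are the helpers sketched elsewhere in this log:

    # -w: start up, then wait for a perform_tests RPC; -s 0: no reserved memory.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$bdevio_pid"
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests   # prints the CUnit report below
    killprocess "$bdevio_pid"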
00:15:21.061 [2024-12-05 09:47:08.484272] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72245 ] 00:15:21.061 [2024-12-05 09:47:08.656429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:21.322 [2024-12-05 09:47:08.784590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.322 [2024-12-05 09:47:08.784780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.322 [2024-12-05 09:47:08.784879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.895 09:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.895 09:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:21.895 09:47:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:21.895 I/O targets: 00:15:21.895 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:21.895 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:21.895 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:21.895 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:21.895 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:21.895 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:21.895 00:15:21.895 00:15:21.895 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.895 http://cunit.sourceforge.net/ 00:15:21.895 00:15:21.895 00:15:21.895 Suite: bdevio tests on: nvme3n1 00:15:21.895 Test: blockdev write read block ...passed 00:15:21.895 Test: blockdev write zeroes read block ...passed 00:15:21.895 Test: blockdev write zeroes read no split ...passed 00:15:21.895 Test: blockdev write zeroes read split ...passed 00:15:21.895 Test: blockdev write zeroes read split partial ...passed 00:15:21.895 Test: blockdev reset ...passed 00:15:21.895 Test: blockdev write read 8 blocks ...passed 00:15:21.895 Test: blockdev write read size > 128k ...passed 00:15:21.895 Test: blockdev write read invalid size ...passed 00:15:21.895 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:21.895 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:21.895 Test: blockdev write read max offset ...passed 00:15:21.895 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:21.895 Test: blockdev writev readv 8 blocks ...passed 00:15:21.895 Test: blockdev writev readv 30 x 1block ...passed 00:15:21.895 Test: blockdev writev readv block ...passed 00:15:21.895 Test: blockdev writev readv size > 128k ...passed 00:15:21.895 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:21.895 Test: blockdev comparev and writev ...passed 00:15:21.895 Test: blockdev nvme passthru rw ...passed 00:15:21.895 Test: blockdev nvme passthru vendor specific ...passed 00:15:21.895 Test: blockdev nvme admin passthru ...passed 00:15:21.895 Test: blockdev copy ...passed 00:15:21.895 Suite: bdevio tests on: nvme2n1 00:15:21.895 Test: blockdev write read block ...passed 00:15:21.895 Test: blockdev write zeroes read block ...passed 00:15:21.895 Test: blockdev write zeroes read no split ...passed 00:15:22.155 Test: blockdev write zeroes read split ...passed 00:15:22.156 Test: blockdev write zeroes read split partial ...passed 00:15:22.156 Test: blockdev reset ...passed 
00:15:22.156 Test: blockdev write read 8 blocks ...passed 00:15:22.156 Test: blockdev write read size > 128k ...passed 00:15:22.156 Test: blockdev write read invalid size ...passed 00:15:22.156 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:22.156 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:22.156 Test: blockdev write read max offset ...passed 00:15:22.156 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:22.156 Test: blockdev writev readv 8 blocks ...passed 00:15:22.156 Test: blockdev writev readv 30 x 1block ...passed 00:15:22.156 Test: blockdev writev readv block ...passed 00:15:22.156 Test: blockdev writev readv size > 128k ...passed 00:15:22.156 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:22.156 Test: blockdev comparev and writev ...passed 00:15:22.156 Test: blockdev nvme passthru rw ...passed 00:15:22.156 Test: blockdev nvme passthru vendor specific ...passed 00:15:22.156 Test: blockdev nvme admin passthru ...passed 00:15:22.156 Test: blockdev copy ...passed 00:15:22.156 Suite: bdevio tests on: nvme1n1 00:15:22.156 Test: blockdev write read block ...passed 00:15:22.156 Test: blockdev write zeroes read block ...passed 00:15:22.156 Test: blockdev write zeroes read no split ...passed 00:15:22.156 Test: blockdev write zeroes read split ...passed 00:15:22.156 Test: blockdev write zeroes read split partial ...passed 00:15:22.156 Test: blockdev reset ...passed 00:15:22.156 Test: blockdev write read 8 blocks ...passed 00:15:22.156 Test: blockdev write read size > 128k ...passed 00:15:22.156 Test: blockdev write read invalid size ...passed 00:15:22.156 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:22.156 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:22.156 Test: blockdev write read max offset ...passed 00:15:22.156 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:22.156 Test: blockdev writev readv 8 blocks ...passed 00:15:22.156 Test: blockdev writev readv 30 x 1block ...passed 00:15:22.156 Test: blockdev writev readv block ...passed 00:15:22.156 Test: blockdev writev readv size > 128k ...passed 00:15:22.156 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:22.156 Test: blockdev comparev and writev ...passed 00:15:22.156 Test: blockdev nvme passthru rw ...passed 00:15:22.156 Test: blockdev nvme passthru vendor specific ...passed 00:15:22.156 Test: blockdev nvme admin passthru ...passed 00:15:22.156 Test: blockdev copy ...passed 00:15:22.156 Suite: bdevio tests on: nvme0n3 00:15:22.156 Test: blockdev write read block ...passed 00:15:22.156 Test: blockdev write zeroes read block ...passed 00:15:22.156 Test: blockdev write zeroes read no split ...passed 00:15:22.156 Test: blockdev write zeroes read split ...passed 00:15:22.156 Test: blockdev write zeroes read split partial ...passed 00:15:22.156 Test: blockdev reset ...passed 00:15:22.156 Test: blockdev write read 8 blocks ...passed 00:15:22.156 Test: blockdev write read size > 128k ...passed 00:15:22.156 Test: blockdev write read invalid size ...passed 00:15:22.156 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:22.156 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:22.156 Test: blockdev write read max offset ...passed 00:15:22.156 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:22.156 Test: blockdev writev readv 8 blocks 
...passed 00:15:22.156 Test: blockdev writev readv 30 x 1block ...passed 00:15:22.156 Test: blockdev writev readv block ...passed 00:15:22.156 Test: blockdev writev readv size > 128k ...passed 00:15:22.156 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:22.156 Test: blockdev comparev and writev ...passed 00:15:22.156 Test: blockdev nvme passthru rw ...passed 00:15:22.156 Test: blockdev nvme passthru vendor specific ...passed 00:15:22.156 Test: blockdev nvme admin passthru ...passed 00:15:22.156 Test: blockdev copy ...passed 00:15:22.156 Suite: bdevio tests on: nvme0n2 00:15:22.156 Test: blockdev write read block ...passed 00:15:22.156 Test: blockdev write zeroes read block ...passed 00:15:22.156 Test: blockdev write zeroes read no split ...passed 00:15:22.417 Test: blockdev write zeroes read split ...passed 00:15:22.417 Test: blockdev write zeroes read split partial ...passed 00:15:22.417 Test: blockdev reset ...passed 00:15:22.417 Test: blockdev write read 8 blocks ...passed 00:15:22.417 Test: blockdev write read size > 128k ...passed 00:15:22.417 Test: blockdev write read invalid size ...passed 00:15:22.417 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:22.417 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:22.417 Test: blockdev write read max offset ...passed 00:15:22.417 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:22.418 Test: blockdev writev readv 8 blocks ...passed 00:15:22.418 Test: blockdev writev readv 30 x 1block ...passed 00:15:22.418 Test: blockdev writev readv block ...passed 00:15:22.418 Test: blockdev writev readv size > 128k ...passed 00:15:22.418 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:22.418 Test: blockdev comparev and writev ...passed 00:15:22.418 Test: blockdev nvme passthru rw ...passed 00:15:22.418 Test: blockdev nvme passthru vendor specific ...passed 00:15:22.418 Test: blockdev nvme admin passthru ...passed 00:15:22.418 Test: blockdev copy ...passed 00:15:22.418 Suite: bdevio tests on: nvme0n1 00:15:22.418 Test: blockdev write read block ...passed 00:15:22.418 Test: blockdev write zeroes read block ...passed 00:15:22.418 Test: blockdev write zeroes read no split ...passed 00:15:22.418 Test: blockdev write zeroes read split ...passed 00:15:22.418 Test: blockdev write zeroes read split partial ...passed 00:15:22.418 Test: blockdev reset ...passed 00:15:22.418 Test: blockdev write read 8 blocks ...passed 00:15:22.418 Test: blockdev write read size > 128k ...passed 00:15:22.418 Test: blockdev write read invalid size ...passed 00:15:22.418 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:22.418 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:22.418 Test: blockdev write read max offset ...passed 00:15:22.418 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:22.418 Test: blockdev writev readv 8 blocks ...passed 00:15:22.418 Test: blockdev writev readv 30 x 1block ...passed 00:15:22.418 Test: blockdev writev readv block ...passed 00:15:22.418 Test: blockdev writev readv size > 128k ...passed 00:15:22.418 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:22.418 Test: blockdev comparev and writev ...passed 00:15:22.418 Test: blockdev nvme passthru rw ...passed 00:15:22.418 Test: blockdev nvme passthru vendor specific ...passed 00:15:22.418 Test: blockdev nvme admin passthru ...passed 00:15:22.418 Test: blockdev copy ...passed 
00:15:22.418 00:15:22.418 Run Summary: Type Total Ran Passed Failed Inactive 00:15:22.418 suites 6 6 n/a 0 0 00:15:22.418 tests 138 138 138 0 0 00:15:22.418 asserts 780 780 780 0 n/a 00:15:22.418 00:15:22.418 Elapsed time = 1.285 seconds 00:15:22.418 0 00:15:22.418 09:47:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72245 00:15:22.418 09:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72245 ']' 00:15:22.418 09:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72245 00:15:22.418 09:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:22.418 09:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.418 09:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72245 00:15:22.418 09:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.418 09:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.418 09:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72245' 00:15:22.418 killing process with pid 72245 00:15:22.418 09:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72245 00:15:22.418 09:47:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72245 00:15:23.361 09:47:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:23.361 00:15:23.361 real 0m2.462s 00:15:23.361 user 0m5.878s 00:15:23.361 sys 0m0.400s 00:15:23.361 09:47:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.361 ************************************ 00:15:23.361 09:47:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:23.361 END TEST bdev_bounds 00:15:23.361 ************************************ 00:15:23.361 09:47:10 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:23.361 09:47:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:23.361 09:47:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.361 09:47:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:23.361 ************************************ 00:15:23.361 START TEST bdev_nbd 00:15:23.361 ************************************ 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
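killprocess, traced twice so far (pids 71926 and 72245), is a thin guard around kill/wait. A simplified sketch; the real helper in autotest_common.sh also uses the captured process name to special-case processes running under sudo:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0    # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")    # reactor_0 for an SPDK app
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap it so the pid cannot be reused mid-test
    }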
00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:23.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72310 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72310 /var/tmp/spdk-nbd.sock 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72310 ']' 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.361 09:47:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:23.622 [2024-12-05 09:47:11.023384] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
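The nbd stage starting above exports each bdev as a kernel block device and probes it before the real tests run. A condensed sketch of the nbd_start_disk/waitfornbd pair traced below, where the single direct-I/O dd doubles as a smoke-test read; the socket path and 20-attempt budget match the trace, the sleep interval is an assumption:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0    # bdev name, nbd node
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break    # kernel has registered the disk
        sleep 0.1                                    # assumed back-off between probes
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct       # read one 4 KiB block
    (( $(stat -c %s /tmp/nbdtest) != 0 ))                              # same non-empty check as the trace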
00:15:23.622 [2024-12-05 09:47:11.023753] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.622 [2024-12-05 09:47:11.187947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.884 [2024-12-05 09:47:11.332666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.455 09:47:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.456 09:47:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:24.456 09:47:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:24.456 09:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:24.456 09:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:24.456 09:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:24.456 09:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:24.456 09:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:24.456 09:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:24.456 09:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:24.456 09:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:24.456 09:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:24.456 09:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:24.456 09:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:24.456 09:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:24.716 
1+0 records in 00:15:24.716 1+0 records out 00:15:24.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105338 s, 3.9 MB/s 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:24.716 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:24.976 1+0 records in 00:15:24.976 1+0 records out 00:15:24.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110885 s, 3.7 MB/s 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:24.976 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:25.237 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:25.237 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:25.237 09:47:12 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:25.237 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:25.237 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:25.237 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:25.237 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:25.237 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:25.237 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:25.237 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:25.237 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:25.237 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.237 1+0 records in 00:15:25.237 1+0 records out 00:15:25.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000939063 s, 4.4 MB/s 00:15:25.237 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.237 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:25.237 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.237 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:25.238 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:25.238 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:25.238 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:25.238 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.500 1+0 records in 00:15:25.500 1+0 records out 00:15:25.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129194 s, 3.2 MB/s 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:25.500 09:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.761 1+0 records in 00:15:25.761 1+0 records out 00:15:25.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122741 s, 3.3 MB/s 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:25.761 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:26.022 09:47:13 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:26.022 1+0 records in 00:15:26.022 1+0 records out 00:15:26.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127201 s, 3.2 MB/s 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:26.022 { 00:15:26.022 "nbd_device": "/dev/nbd0", 00:15:26.022 "bdev_name": "nvme0n1" 00:15:26.022 }, 00:15:26.022 { 00:15:26.022 "nbd_device": "/dev/nbd1", 00:15:26.022 "bdev_name": "nvme0n2" 00:15:26.022 }, 00:15:26.022 { 00:15:26.022 "nbd_device": "/dev/nbd2", 00:15:26.022 "bdev_name": "nvme0n3" 00:15:26.022 }, 00:15:26.022 { 00:15:26.022 "nbd_device": "/dev/nbd3", 00:15:26.022 "bdev_name": "nvme1n1" 00:15:26.022 }, 00:15:26.022 { 00:15:26.022 "nbd_device": "/dev/nbd4", 00:15:26.022 "bdev_name": "nvme2n1" 00:15:26.022 }, 00:15:26.022 { 00:15:26.022 "nbd_device": "/dev/nbd5", 00:15:26.022 "bdev_name": "nvme3n1" 00:15:26.022 } 00:15:26.022 ]' 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:26.022 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:26.022 { 00:15:26.022 "nbd_device": "/dev/nbd0", 00:15:26.022 "bdev_name": "nvme0n1" 00:15:26.022 }, 00:15:26.022 { 00:15:26.022 "nbd_device": "/dev/nbd1", 00:15:26.022 "bdev_name": "nvme0n2" 00:15:26.022 }, 00:15:26.022 { 00:15:26.022 "nbd_device": "/dev/nbd2", 00:15:26.022 "bdev_name": "nvme0n3" 00:15:26.022 }, 00:15:26.022 { 00:15:26.022 "nbd_device": "/dev/nbd3", 00:15:26.023 "bdev_name": "nvme1n1" 00:15:26.023 }, 00:15:26.023 { 00:15:26.023 "nbd_device": "/dev/nbd4", 00:15:26.023 "bdev_name": "nvme2n1" 00:15:26.023 }, 00:15:26.023 { 00:15:26.023 "nbd_device": 
"/dev/nbd5", 00:15:26.023 "bdev_name": "nvme3n1" 00:15:26.023 } 00:15:26.023 ]' 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.283 09:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:26.543 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:26.543 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:26.543 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:26.543 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.543 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.543 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:26.543 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.543 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.543 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.543 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:26.804 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:26.804 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:26.804 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:26.804 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.804 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.804 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:26.804 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.804 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.804 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.804 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:27.065 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:27.065 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:27.065 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:27.065 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.065 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.065 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:27.065 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:27.065 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.065 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:27.065 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:27.325 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:27.325 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:27.325 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:27.325 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.325 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.325 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:27.325 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:27.325 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.325 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:27.325 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:27.584 09:47:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:27.584 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:27.584 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:27.584 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.584 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.584 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:27.584 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:27.584 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.584 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:27.584 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:27.584 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:27.844 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:28.104 /dev/nbd0 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.104 1+0 records in 00:15:28.104 1+0 records out 00:15:28.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129138 s, 3.2 MB/s 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:28.104 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:28.364 /dev/nbd1 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.364 1+0 records in 00:15:28.364 1+0 records out 00:15:28.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104782 s, 3.9 MB/s 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.364 09:47:15 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:28.364 09:47:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:28.623 /dev/nbd10 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.623 1+0 records in 00:15:28.623 1+0 records out 00:15:28.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120556 s, 3.4 MB/s 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:28.623 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:28.623 /dev/nbd11 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.882 09:47:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.882 1+0 records in 00:15:28.882 1+0 records out 00:15:28.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00171171 s, 2.4 MB/s 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:28.882 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:28.882 /dev/nbd12 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.142 1+0 records in 00:15:29.142 1+0 records out 00:15:29.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00353522 s, 1.2 MB/s 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:29.142 /dev/nbd13 00:15:29.142 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.401 1+0 records in 00:15:29.401 1+0 records out 00:15:29.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123263 s, 3.3 MB/s 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:29.401 09:47:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:29.401 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:29.401 { 00:15:29.401 "nbd_device": "/dev/nbd0", 00:15:29.401 "bdev_name": "nvme0n1" 00:15:29.401 }, 00:15:29.401 { 00:15:29.401 "nbd_device": "/dev/nbd1", 00:15:29.401 "bdev_name": "nvme0n2" 00:15:29.401 }, 00:15:29.401 { 00:15:29.401 "nbd_device": "/dev/nbd10", 00:15:29.401 "bdev_name": "nvme0n3" 00:15:29.401 }, 00:15:29.401 { 00:15:29.401 "nbd_device": "/dev/nbd11", 00:15:29.401 "bdev_name": "nvme1n1" 00:15:29.401 }, 00:15:29.401 { 00:15:29.401 "nbd_device": "/dev/nbd12", 00:15:29.401 "bdev_name": "nvme2n1" 00:15:29.401 }, 00:15:29.401 { 00:15:29.401 "nbd_device": "/dev/nbd13", 00:15:29.401 "bdev_name": "nvme3n1" 00:15:29.401 } 00:15:29.401 ]' 00:15:29.401 09:47:17 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:15:29.401 { 00:15:29.401 "nbd_device": "/dev/nbd0", 00:15:29.401 "bdev_name": "nvme0n1" 00:15:29.401 }, 00:15:29.401 { 00:15:29.401 "nbd_device": "/dev/nbd1", 00:15:29.401 "bdev_name": "nvme0n2" 00:15:29.401 }, 00:15:29.401 { 00:15:29.401 "nbd_device": "/dev/nbd10", 00:15:29.401 "bdev_name": "nvme0n3" 00:15:29.401 }, 00:15:29.401 { 00:15:29.401 "nbd_device": "/dev/nbd11", 00:15:29.401 "bdev_name": "nvme1n1" 00:15:29.401 }, 00:15:29.401 { 00:15:29.401 "nbd_device": "/dev/nbd12", 00:15:29.401 "bdev_name": "nvme2n1" 00:15:29.401 }, 00:15:29.401 { 00:15:29.401 "nbd_device": "/dev/nbd13", 00:15:29.401 "bdev_name": "nvme3n1" 00:15:29.401 } 00:15:29.401 ]' 00:15:29.401 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:29.661 /dev/nbd1 00:15:29.661 /dev/nbd10 00:15:29.661 /dev/nbd11 00:15:29.661 /dev/nbd12 00:15:29.661 /dev/nbd13' 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:29.661 /dev/nbd1 00:15:29.661 /dev/nbd10 00:15:29.661 /dev/nbd11 00:15:29.661 /dev/nbd12 00:15:29.661 /dev/nbd13' 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:29.661 256+0 records in 00:15:29.661 256+0 records out 00:15:29.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00646054 s, 162 MB/s 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:29.661 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:29.921 256+0 records in 00:15:29.921 256+0 records out 00:15:29.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238588 s, 4.4 MB/s 00:15:29.921 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:29.921 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:29.921 256+0 records in 00:15:29.921 256+0 records out 00:15:29.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.195631 s, 
5.4 MB/s 00:15:29.921 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:29.921 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:30.180 256+0 records in 00:15:30.180 256+0 records out 00:15:30.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.241404 s, 4.3 MB/s 00:15:30.180 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:30.180 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:30.441 256+0 records in 00:15:30.441 256+0 records out 00:15:30.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245631 s, 4.3 MB/s 00:15:30.441 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:30.441 09:47:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:30.702 256+0 records in 00:15:30.702 256+0 records out 00:15:30.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.309181 s, 3.4 MB/s 00:15:30.702 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:30.702 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:30.962 256+0 records in 00:15:30.962 256+0 records out 00:15:30.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.198506 s, 5.3 MB/s 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:30.962 
09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.962 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:31.222 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:31.222 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:31.222 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:31.222 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.222 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.222 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:31.222 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.222 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.222 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.222 09:47:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:31.482 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:31.482 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:31.482 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:31.482 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.482 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.482 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:31.482 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.482 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.482 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.482 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:31.742 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:31.742 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:31.743 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:31.743 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.743 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.743 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:31.743 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.743 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.743 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.743 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:32.002 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:32.002 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:32.002 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:32.002 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.002 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.002 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:32.002 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:32.002 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.002 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.002 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.263 
09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:32.263 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.264 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:32.264 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.264 09:47:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:32.525 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:32.786 malloc_lvol_verify 00:15:32.786 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:33.047 6f2bb552-9e4d-40df-8fc4-ad3d8bf81143 00:15:33.047 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:33.308 a890d6fb-7d0f-4755-9a18-d79e793d52c1 00:15:33.308 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:33.308 /dev/nbd0 00:15:33.308 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:33.308 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:33.308 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:33.308 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:33.308 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:33.308 mke2fs 1.47.0 (5-Feb-2023) 00:15:33.308 Discarding device blocks: 0/4096 
done 00:15:33.308 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:33.308 00:15:33.308 Allocating group tables: 0/1 done 00:15:33.308 Writing inode tables: 0/1 done 00:15:33.569 Creating journal (1024 blocks): done 00:15:33.569 Writing superblocks and filesystem accounting information: 0/1 done 00:15:33.569 00:15:33.569 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:33.569 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.569 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:33.569 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:33.569 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:33.569 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:33.569 09:47:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72310 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72310 ']' 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72310 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72310 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:33.569 killing process with pid 72310 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72310' 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72310 00:15:33.569 09:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72310 00:15:34.141 09:47:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:34.141 00:15:34.141 real 0m10.826s 00:15:34.141 user 0m14.493s 00:15:34.141 sys 0m3.837s 00:15:34.141 09:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.141 ************************************ 00:15:34.141 END TEST bdev_nbd 00:15:34.141 ************************************ 00:15:34.141 09:47:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
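For readers following the trace above: stripped of the xtrace plumbing, the nbd_rpc_data_verify cycle that TEST bdev_nbd just completed reduces to roughly the shell below. The device pairings, the RPC socket, the 4096-byte direct-read probe, and the 1 MiB random write/compare are taken from this run; the /tmp paths, the retry sleep, and the loop structure are simplifications, so treat this as an illustrative sketch rather than the canonical autotest helpers.

#!/usr/bin/env bash
# Sketch of the start -> probe -> write/compare -> stop cycle traced above.
# Assumes an SPDK target is already serving RPCs on /var/tmp/spdk-nbd.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdevs=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

for i in "${!bdevs[@]}"; do
  "$rpc" -s "$sock" nbd_start_disk "${bdevs[i]}" "${nbds[i]}"
  # waitfornbd: poll /proc/partitions until the kernel publishes the device
  # (the traced helper caps at 20 tries; the sleep interval here is an
  # assumption), then prove it is readable with one direct-I/O 4 KiB read.
  for _ in $(seq 1 20); do
    grep -q -w "$(basename "${nbds[i]}")" /proc/partitions && break
    sleep 0.1
  done
  dd if="${nbds[i]}" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
done
rm -f /tmp/nbdtest

# Data-path check: push 1 MiB of random data through every device and
# compare the device contents back against the source file.
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
for nbd in "${nbds[@]}"; do
  dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest "$nbd"
done
rm -f /tmp/nbdrandtest

# Tear down and confirm nothing is left exported.
for nbd in "${nbds[@]}"; do
  "$rpc" -s "$sock" nbd_stop_disk "$nbd"
done
"$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'   # expect no output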
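The fio stage that follows (TEST bdev_fio) drives the same six xNVMe bdevs through fio's external spdk_bdev engine. The job sections and the final command line are echoed piecemeal in the trace below; collected in one place they amount to the following. The global options at the top of test/bdev/bdev.fio are not visible in this log and are therefore omitted, and the heredoc framing is ours.

# Job sections appended to test/bdev/bdev.fio, one per bdev under test:
cat >> /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio <<'EOF'
[job_nvme0n1]
filename=nvme0n1
[job_nvme0n2]
filename=nvme0n2
[job_nvme0n3]
filename=nvme0n3
[job_nvme1n1]
filename=nvme1n1
[job_nvme2n1]
filename=nvme2n1
[job_nvme3n1]
filename=nvme3n1
EOF

# Run fio with the SPDK bdev plugin; libasan is preloaded ahead of the plugin
# so the sanitized symbols resolve first (paths as detected in this run).
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output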
00:15:34.403 09:47:21 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:34.403 09:47:21 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:34.403 09:47:21 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:34.403 09:47:21 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:34.403 09:47:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:34.403 09:47:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.403 09:47:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.403 ************************************ 00:15:34.403 START TEST bdev_fio 00:15:34.403 ************************************ 00:15:34.403 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:34.403 ************************************ 00:15:34.403 START TEST bdev_fio_rw_verify 00:15:34.403 ************************************ 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:34.403 09:47:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:34.664 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:34.664 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:34.664 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:34.664 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:34.664 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:34.664 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:34.664 fio-3.35 00:15:34.664 Starting 6 threads 00:15:46.903 00:15:46.903 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72713: Thu Dec 5 09:47:32 2024 00:15:46.903 read: IOPS=16.2k, BW=63.3MiB/s (66.4MB/s)(633MiB/10002msec) 00:15:46.903 slat (usec): min=2, max=2446, avg= 6.79, stdev=17.45 00:15:46.903 clat (usec): min=88, max=12386, avg=1158.46, stdev=753.50 00:15:46.903 lat (usec): min=92, max=12401, avg=1165.25, stdev=754.14 
00:15:46.903 clat percentiles (usec): 00:15:46.903 | 50.000th=[ 1045], 99.000th=[ 3523], 99.900th=[ 4621], 99.990th=[ 5407], 00:15:46.903 | 99.999th=[12387] 00:15:46.903 write: IOPS=16.6k, BW=64.7MiB/s (67.8MB/s)(647MiB/10002msec); 0 zone resets 00:15:46.903 slat (usec): min=10, max=4305, avg=40.25, stdev=132.74 00:15:46.903 clat (usec): min=80, max=12650, avg=1437.54, stdev=828.86 00:15:46.903 lat (usec): min=97, max=12721, avg=1477.79, stdev=841.85 00:15:46.903 clat percentiles (usec): 00:15:46.903 | 50.000th=[ 1319], 99.000th=[ 3982], 99.900th=[ 5604], 99.990th=[ 8356], 00:15:46.903 | 99.999th=[12649] 00:15:46.903 bw ( KiB/s): min=49028, max=94323, per=100.00%, avg=67077.63, stdev=2308.52, samples=114 00:15:46.903 iops : min=12254, max=23578, avg=16768.42, stdev=577.10, samples=114 00:15:46.903 lat (usec) : 100=0.01%, 250=4.57%, 500=10.99%, 750=12.07%, 1000=12.76% 00:15:46.903 lat (msec) : 2=42.75%, 4=16.17%, 10=0.67%, 20=0.01% 00:15:46.903 cpu : usr=42.30%, sys=32.99%, ctx=5936, majf=0, minf=15935 00:15:46.903 IO depths : 1=11.4%, 2=23.8%, 4=51.2%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:46.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.903 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.903 issued rwts: total=162150,165630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.903 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:46.903 00:15:46.903 Run status group 0 (all jobs): 00:15:46.903 READ: bw=63.3MiB/s (66.4MB/s), 63.3MiB/s-63.3MiB/s (66.4MB/s-66.4MB/s), io=633MiB (664MB), run=10002-10002msec 00:15:46.903 WRITE: bw=64.7MiB/s (67.8MB/s), 64.7MiB/s-64.7MiB/s (67.8MB/s-67.8MB/s), io=647MiB (678MB), run=10002-10002msec 00:15:46.903 ----------------------------------------------------- 00:15:46.903 Suppressions used: 00:15:46.903 count bytes template 00:15:46.903 6 48 /usr/src/fio/parse.c 00:15:46.903 3375 324000 /usr/src/fio/iolog.c 00:15:46.903 1 8 libtcmalloc_minimal.so 00:15:46.903 1 904 libcrypto.so 00:15:46.903 ----------------------------------------------------- 00:15:46.903 00:15:46.903 00:15:46.903 real 0m11.968s 00:15:46.903 user 0m26.903s 00:15:46.903 sys 0m20.127s 00:15:46.903 09:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.903 ************************************ 00:15:46.903 END TEST bdev_fio_rw_verify 00:15:46.903 ************************************ 00:15:46.903 09:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:46.903 09:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:46.903 09:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:46.903 09:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:46.903 09:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:46.903 09:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:46.903 09:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:46.903 09:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:46.903 09:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:46.903 09:47:33 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:46.903 09:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:46.903 09:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:46.904 09:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:46.904 09:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:46.904 09:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:46.904 09:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:46.904 09:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:46.904 09:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:46.904 09:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "0b2e1ce4-f3fc-4889-9c9b-47cc172e1fb6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0b2e1ce4-f3fc-4889-9c9b-47cc172e1fb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "7006c8ea-6319-40fc-8746-0bf8218c93b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7006c8ea-6319-40fc-8746-0bf8218c93b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "5b8fbae9-c6e1-4400-bcb0-28fd07586a46"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5b8fbae9-c6e1-4400-bcb0-28fd07586a46",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1ad734fd-d162-47fc-b0d6-6dd0224ecfb6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1ad734fd-d162-47fc-b0d6-6dd0224ecfb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "185ba7b7-3f74-400d-b8a6-06120a3d0f50"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "185ba7b7-3f74-400d-b8a6-06120a3d0f50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6911c971-f63c-409e-a4c8-24872b2dcf8c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6911c971-f63c-409e-a4c8-24872b2dcf8c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:46.904 09:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:46.904 09:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:46.904 /home/vagrant/spdk_repo/spdk 00:15:46.904 09:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:46.904 09:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:46.904 09:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:46.904 00:15:46.904 real 0m12.153s 00:15:46.904 user 
0m26.988s 00:15:46.904 sys 0m20.206s 00:15:46.904 ************************************ 00:15:46.904 09:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.904 09:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:46.904 END TEST bdev_fio 00:15:46.904 ************************************ 00:15:46.904 09:47:34 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:46.904 09:47:34 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:46.904 09:47:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:46.904 09:47:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.904 09:47:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:46.904 ************************************ 00:15:46.904 START TEST bdev_verify 00:15:46.904 ************************************ 00:15:46.904 09:47:34 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:46.904 [2024-12-05 09:47:34.124829] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:15:46.904 [2024-12-05 09:47:34.125002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72888 ] 00:15:46.904 [2024-12-05 09:47:34.292268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:46.904 [2024-12-05 09:47:34.419315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.904 [2024-12-05 09:47:34.419438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.502 Running I/O for 5 seconds... 
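The verify pass above exercises all six xNVMe bdevs through SPDK's bdevperf example binary. As a minimal standalone sketch of the recorded invocation (paths and flags are copied from the log; the wrapper variable is illustrative):

    #!/usr/bin/env bash
    # Replay the bdev_verify step outside the autotest harness.
    SPDK=/home/vagrant/spdk_repo/spdk   # repo layout as shown in this log

    # -q 128   : 128 outstanding I/Os per job
    # -o 4096  : 4 KiB I/O size
    # -w verify: write a pattern, read it back, and compare
    # -t 5     : run each job for 5 seconds
    # -C -m 0x3: run jobs on both cores of mask 0x3 (hence the per-bdev
    #            "Core Mask 0x1" and "Core Mask 0x2" job lines below)
    "$SPDK/build/examples/bdevperf" \
        --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3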
00:15:49.382 24448.00 IOPS, 95.50 MiB/s [2024-12-05T09:47:38.416Z] 24400.00 IOPS, 95.31 MiB/s [2024-12-05T09:47:39.412Z] 24234.67 IOPS, 94.67 MiB/s [2024-12-05T09:47:39.986Z] 24520.00 IOPS, 95.78 MiB/s [2024-12-05T09:47:39.986Z] 24435.20 IOPS, 95.45 MiB/s 00:15:52.357 Latency(us) 00:15:52.357 [2024-12-05T09:47:39.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.357 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.357 Verification LBA range: start 0x0 length 0x80000 00:15:52.357 nvme0n1 : 5.07 1792.15 7.00 0.00 0.00 71292.34 9527.93 77030.01 00:15:52.357 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:52.357 Verification LBA range: start 0x80000 length 0x80000 00:15:52.357 nvme0n1 : 5.04 1980.01 7.73 0.00 0.00 64537.45 7007.31 64527.75 00:15:52.357 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.357 Verification LBA range: start 0x0 length 0x80000 00:15:52.357 nvme0n2 : 5.07 1791.61 7.00 0.00 0.00 71161.88 11695.66 75416.81 00:15:52.357 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:52.357 Verification LBA range: start 0x80000 length 0x80000 00:15:52.357 nvme0n2 : 5.05 1976.09 7.72 0.00 0.00 64570.50 8217.21 65334.35 00:15:52.357 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.357 Verification LBA range: start 0x0 length 0x80000 00:15:52.357 nvme0n3 : 5.08 1790.02 6.99 0.00 0.00 71064.54 8570.09 61704.66 00:15:52.357 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:52.357 Verification LBA range: start 0x80000 length 0x80000 00:15:52.357 nvme0n3 : 5.05 1975.43 7.72 0.00 0.00 64497.25 9326.28 58478.28 00:15:52.357 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.357 Verification LBA range: start 0x0 length 0x20000 00:15:52.357 nvme1n1 : 5.08 1788.95 6.99 0.00 0.00 70971.11 10889.06 68157.44 00:15:52.357 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:52.357 Verification LBA range: start 0x20000 length 0x20000 00:15:52.357 nvme1n1 : 5.05 1977.47 7.72 0.00 0.00 64331.23 11746.07 61704.66 00:15:52.357 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.357 Verification LBA range: start 0x0 length 0xbd0bd 00:15:52.357 nvme2n1 : 5.09 2585.85 10.10 0.00 0.00 48931.59 5419.32 53235.40 00:15:52.357 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:52.357 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:52.357 nvme2n1 : 5.06 2721.71 10.63 0.00 0.00 46622.67 3528.86 51420.55 00:15:52.357 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.357 Verification LBA range: start 0x0 length 0xa0000 00:15:52.357 nvme3n1 : 5.08 1838.75 7.18 0.00 0.00 68572.88 4108.60 75416.81 00:15:52.357 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:52.357 Verification LBA range: start 0xa0000 length 0xa0000 00:15:52.357 nvme3n1 : 5.06 2022.56 7.90 0.00 0.00 62604.48 6200.71 64124.46 00:15:52.357 [2024-12-05T09:47:39.986Z] =================================================================================================================== 00:15:52.357 [2024-12-05T09:47:39.986Z] Total : 24240.61 94.69 0.00 0.00 62936.31 3528.86 77030.01 00:15:53.301 00:15:53.301 real 0m6.771s 00:15:53.301 user 0m10.662s 00:15:53.301 sys 0m1.669s 00:15:53.301 09:47:40 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.301 ************************************ 00:15:53.301 END TEST bdev_verify 00:15:53.301 ************************************ 00:15:53.301 09:47:40 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:53.301 09:47:40 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:53.301 09:47:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:53.301 09:47:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.301 09:47:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.302 ************************************ 00:15:53.302 START TEST bdev_verify_big_io 00:15:53.302 ************************************ 00:15:53.302 09:47:40 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:53.563 [2024-12-05 09:47:40.963163] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:15:53.563 [2024-12-05 09:47:40.963340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72989 ] 00:15:53.563 [2024-12-05 09:47:41.128298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:53.823 [2024-12-05 09:47:41.249202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.823 [2024-12-05 09:47:41.249325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.396 Running I/O for 5 seconds... 
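The big-I/O pass that follows is the same bdevperf run with the I/O size raised from 4 KiB to 64 KiB, which is why the IOPS figures drop while the MiB/s stay in a comparable range. A sketch, reusing the $SPDK variable from the snippet above:

    # Identical flags, except -o 65536 selects 64 KiB I/Os.
    "$SPDK/build/examples/bdevperf" \
        --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3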
00:15:59.082 1472.00 IOPS, 92.00 MiB/s [2024-12-05T09:47:48.099Z] 2510.00 IOPS, 156.88 MiB/s [2024-12-05T09:47:48.099Z] 2846.67 IOPS, 177.92 MiB/s 00:16:00.470 Latency(us) 00:16:00.470 [2024-12-05T09:47:48.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.470 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:00.470 Verification LBA range: start 0x0 length 0x8000 00:16:00.470 nvme0n1 : 5.98 114.98 7.19 0.00 0.00 1085061.61 70173.93 1129235.69 00:16:00.470 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:00.470 Verification LBA range: start 0x8000 length 0x8000 00:16:00.470 nvme0n1 : 5.79 138.06 8.63 0.00 0.00 898942.94 102034.51 993727.41 00:16:00.470 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:00.470 Verification LBA range: start 0x0 length 0x8000 00:16:00.470 nvme0n2 : 5.95 91.04 5.69 0.00 0.00 1289631.66 81466.29 2090699.22 00:16:00.470 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:00.470 Verification LBA range: start 0x8000 length 0x8000 00:16:00.470 nvme0n2 : 5.64 140.31 8.77 0.00 0.00 849657.17 5595.77 822728.86 00:16:00.470 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:00.470 Verification LBA range: start 0x0 length 0x8000 00:16:00.470 nvme0n3 : 5.99 130.96 8.19 0.00 0.00 892022.12 70173.93 1277649.53 00:16:00.470 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:00.470 Verification LBA range: start 0x8000 length 0x8000 00:16:00.470 nvme0n3 : 5.93 105.27 6.58 0.00 0.00 1088231.74 20366.57 2413337.99 00:16:00.470 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:00.470 Verification LBA range: start 0x0 length 0x2000 00:16:00.470 nvme1n1 : 5.99 114.46 7.15 0.00 0.00 980702.81 35490.26 2155226.98 00:16:00.470 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:00.470 Verification LBA range: start 0x2000 length 0x2000 00:16:00.470 nvme1n1 : 5.94 129.37 8.09 0.00 0.00 860298.63 145994.04 1077613.49 00:16:00.470 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:00.470 Verification LBA range: start 0x0 length 0xbd0b 00:16:00.470 nvme2n1 : 5.99 154.97 9.69 0.00 0.00 698764.92 9527.93 890483.00 00:16:00.470 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:00.470 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:00.470 nvme2n1 : 5.95 185.42 11.59 0.00 0.00 599979.52 2029.10 935652.43 00:16:00.470 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:00.470 Verification LBA range: start 0x0 length 0xa000 00:16:00.470 nvme3n1 : 6.00 138.64 8.67 0.00 0.00 761457.70 3554.07 2116510.33 00:16:00.470 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:00.470 Verification LBA range: start 0xa000 length 0xa000 00:16:00.470 nvme3n1 : 5.95 161.39 10.09 0.00 0.00 667220.77 4083.40 1593835.52 00:16:00.470 [2024-12-05T09:47:48.099Z] =================================================================================================================== 00:16:00.470 [2024-12-05T09:47:48.099Z] Total : 1604.89 100.31 0.00 0.00 855527.45 2029.10 2413337.99 00:16:01.414 00:16:01.414 real 0m7.897s 00:16:01.414 user 0m14.383s 00:16:01.414 sys 0m0.514s 00:16:01.415 09:47:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.415 
************************************ 00:16:01.415 END TEST bdev_verify_big_io 00:16:01.415 ************************************ 00:16:01.415 09:47:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.415 09:47:48 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:01.415 09:47:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:01.415 09:47:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.415 09:47:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:01.415 ************************************ 00:16:01.415 START TEST bdev_write_zeroes 00:16:01.415 ************************************ 00:16:01.415 09:47:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:01.415 [2024-12-05 09:47:48.923956] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:01.415 [2024-12-05 09:47:48.924103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73100 ] 00:16:01.674 [2024-12-05 09:47:49.090963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.674 [2024-12-05 09:47:49.208857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.242 Running I/O for 1 seconds... 
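The write_zeroes pass is shorter and simpler: a 1-second run with no verify phase, on a single core (core mask 0x1 per the EAL parameters above). A sketch, again reusing $SPDK:

    # write_zeroes drives zero-fill writes through each bdev; -t 1 keeps
    # it to one second, and with no -C/-m the default single reactor
    # (core 0) runs every job.
    "$SPDK/build/examples/bdevperf" \
        --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w write_zeroes -t 1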
00:16:03.180 72736.00 IOPS, 284.12 MiB/s 00:16:03.180 Latency(us) 00:16:03.180 [2024-12-05T09:47:50.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.180 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.180 nvme0n1 : 1.02 11808.70 46.13 0.00 0.00 10828.90 5041.23 22887.19 00:16:03.180 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.180 nvme0n2 : 1.02 11664.26 45.56 0.00 0.00 10953.69 5595.77 22685.54 00:16:03.180 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.180 nvme0n3 : 1.01 11780.45 46.02 0.00 0.00 10839.19 5595.77 22080.59 00:16:03.180 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.180 nvme1n1 : 1.01 11748.72 45.89 0.00 0.00 10860.87 5696.59 21576.47 00:16:03.180 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.180 nvme2n1 : 1.02 13560.26 52.97 0.00 0.00 9401.47 4032.98 17644.31 00:16:03.180 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.180 nvme3n1 : 1.02 11696.64 45.69 0.00 0.00 10888.98 4159.02 23794.61 00:16:03.180 [2024-12-05T09:47:50.809Z] =================================================================================================================== 00:16:03.180 [2024-12-05T09:47:50.809Z] Total : 72259.03 282.26 0.00 0.00 10596.60 4032.98 23794.61 00:16:04.116 00:16:04.116 real 0m2.561s 00:16:04.116 user 0m1.874s 00:16:04.116 sys 0m0.490s 00:16:04.116 ************************************ 00:16:04.116 END TEST bdev_write_zeroes 00:16:04.116 ************************************ 00:16:04.116 09:47:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.116 09:47:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:04.116 09:47:51 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:04.116 09:47:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:04.116 09:47:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.116 09:47:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:04.116 ************************************ 00:16:04.116 START TEST bdev_json_nonenclosed 00:16:04.116 ************************************ 00:16:04.116 09:47:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:04.116 [2024-12-05 09:47:51.541199] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:04.116 [2024-12-05 09:47:51.541330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73144 ] 00:16:04.116 [2024-12-05 09:47:51.705877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.377 [2024-12-05 09:47:51.803381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.377 [2024-12-05 09:47:51.803461] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:04.377 [2024-12-05 09:47:51.803478] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:04.377 [2024-12-05 09:47:51.803487] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:04.377 00:16:04.377 real 0m0.520s 00:16:04.377 user 0m0.310s 00:16:04.377 sys 0m0.106s 00:16:04.377 09:47:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.377 ************************************ 00:16:04.377 END TEST bdev_json_nonenclosed 00:16:04.377 ************************************ 00:16:04.377 09:47:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:04.638 09:47:52 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:04.638 09:47:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:04.638 09:47:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.638 09:47:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:04.638 ************************************ 00:16:04.638 START TEST bdev_json_nonarray 00:16:04.638 ************************************ 00:16:04.638 09:47:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:04.638 [2024-12-05 09:47:52.141245] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:04.638 [2024-12-05 09:47:52.141411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73175 ] 00:16:04.896 [2024-12-05 09:47:52.308085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.896 [2024-12-05 09:47:52.419586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.896 [2024-12-05 09:47:52.419708] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
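These two JSON tests hand bdevperf a deliberately malformed config and expect spdk_app_stop'd on non-zero. The log never shows the file bodies, so the following minimal contents are hypothetical, written only to trip the two error paths this pair exercises (the "not enclosed in {}" error above and the "'subsystems' should be an array" error just below):

    # Hypothetical nonenclosed.json: valid JSON, but the top-level value
    # is an array rather than an object -> "not enclosed in {}."
    echo '[ { "subsystems": [] } ]' > nonenclosed.json

    # Hypothetical nonarray.json: enclosed in {}, but "subsystems" is an
    # object -> "'subsystems' should be an array."
    echo '{ "subsystems": {} }' > nonarray.json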
00:16:04.896 [2024-12-05 09:47:52.419729] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:04.896 [2024-12-05 09:47:52.419740] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:05.157 00:16:05.157 real 0m0.547s 00:16:05.157 user 0m0.324s 00:16:05.157 sys 0m0.118s 00:16:05.157 09:47:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.157 09:47:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:05.157 ************************************ 00:16:05.157 END TEST bdev_json_nonarray 00:16:05.157 ************************************ 00:16:05.157 09:47:52 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:05.157 09:47:52 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:05.157 09:47:52 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:05.157 09:47:52 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:05.157 09:47:52 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:05.157 09:47:52 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:05.157 09:47:52 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:05.157 09:47:52 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:05.157 09:47:52 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:05.157 09:47:52 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:05.157 09:47:52 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:05.157 09:47:52 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:05.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:18.089 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:18.089 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:18.089 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:18.089 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:18.089 00:16:18.089 real 1m3.168s 00:16:18.089 user 1m21.801s 00:16:18.089 sys 0m45.634s 00:16:18.089 ************************************ 00:16:18.089 09:48:05 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.089 09:48:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:18.089 END TEST blockdev_xnvme 00:16:18.089 ************************************ 00:16:18.350 09:48:05 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:18.350 09:48:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:18.350 09:48:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.350 09:48:05 -- common/autotest_common.sh@10 -- # set +x 00:16:18.350 ************************************ 00:16:18.350 START TEST ublk 00:16:18.350 ************************************ 00:16:18.350 09:48:05 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:18.350 * Looking for test storage... 
00:16:18.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:18.350 09:48:05 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:18.350 09:48:05 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:18.350 09:48:05 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:18.350 09:48:05 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:18.350 09:48:05 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:18.350 09:48:05 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.350 09:48:05 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.350 09:48:05 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.350 09:48:05 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.350 09:48:05 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.350 09:48:05 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.350 09:48:05 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:18.350 09:48:05 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:18.350 09:48:05 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:18.350 09:48:05 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.350 09:48:05 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:18.350 09:48:05 ublk -- scripts/common.sh@345 -- # : 1 00:16:18.350 09:48:05 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.350 09:48:05 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:18.350 09:48:05 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:18.350 09:48:05 ublk -- scripts/common.sh@353 -- # local d=1 00:16:18.350 09:48:05 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.350 09:48:05 ublk -- scripts/common.sh@355 -- # echo 1 00:16:18.350 09:48:05 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.350 09:48:05 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:18.350 09:48:05 ublk -- scripts/common.sh@353 -- # local d=2 00:16:18.350 09:48:05 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.350 09:48:05 ublk -- scripts/common.sh@355 -- # echo 2 00:16:18.350 09:48:05 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:18.350 09:48:05 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.350 09:48:05 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:18.350 09:48:05 ublk -- scripts/common.sh@368 -- # return 0 00:16:18.350 09:48:05 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.350 09:48:05 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:18.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.350 --rc genhtml_branch_coverage=1 00:16:18.350 --rc genhtml_function_coverage=1 00:16:18.350 --rc genhtml_legend=1 00:16:18.350 --rc geninfo_all_blocks=1 00:16:18.350 --rc geninfo_unexecuted_blocks=1 00:16:18.350 00:16:18.350 ' 00:16:18.350 09:48:05 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:18.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.350 --rc genhtml_branch_coverage=1 00:16:18.350 --rc genhtml_function_coverage=1 00:16:18.350 --rc genhtml_legend=1 00:16:18.350 --rc geninfo_all_blocks=1 00:16:18.350 --rc geninfo_unexecuted_blocks=1 00:16:18.350 00:16:18.350 ' 00:16:18.350 09:48:05 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:18.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.350 --rc genhtml_branch_coverage=1 00:16:18.350 --rc 
genhtml_function_coverage=1 00:16:18.350 --rc genhtml_legend=1 00:16:18.350 --rc geninfo_all_blocks=1 00:16:18.350 --rc geninfo_unexecuted_blocks=1 00:16:18.350 00:16:18.350 ' 00:16:18.350 09:48:05 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:18.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.350 --rc genhtml_branch_coverage=1 00:16:18.350 --rc genhtml_function_coverage=1 00:16:18.350 --rc genhtml_legend=1 00:16:18.350 --rc geninfo_all_blocks=1 00:16:18.350 --rc geninfo_unexecuted_blocks=1 00:16:18.350 00:16:18.350 ' 00:16:18.350 09:48:05 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:18.350 09:48:05 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:18.350 09:48:05 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:18.350 09:48:05 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:18.350 09:48:05 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:18.350 09:48:05 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:18.350 09:48:05 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:18.350 09:48:05 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:18.350 09:48:05 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:18.350 09:48:05 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:18.350 09:48:05 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:18.350 09:48:05 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:18.350 09:48:05 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:18.350 09:48:05 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:18.350 09:48:05 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:18.350 09:48:05 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:18.350 09:48:05 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:18.350 09:48:05 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:18.350 09:48:05 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:18.350 09:48:05 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:18.350 09:48:05 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:18.350 09:48:05 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.350 09:48:05 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.350 ************************************ 00:16:18.350 START TEST test_save_ublk_config 00:16:18.350 ************************************ 00:16:18.350 09:48:05 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:18.350 09:48:05 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:18.350 09:48:05 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73465 00:16:18.350 09:48:05 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:18.350 09:48:05 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:18.350 09:48:05 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73465 00:16:18.350 09:48:05 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73465 ']' 00:16:18.350 09:48:05 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.350 09:48:05 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
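The waitforlisten call above blocks until the freshly launched spdk_tgt answers RPCs on /var/tmp/spdk.sock. A rough standalone equivalent (the polling loop and timeout are illustrative; spdk_tgt, rpc.py, and the spdk_get_version method are standard SPDK pieces, with paths as seen in this log):

    "$SPDK/build/bin/spdk_tgt" -L ublk &
    tgtpid=$!

    # Poll the default RPC socket until the target responds (~10 s cap).
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version \
            &>/dev/null && break
        sleep 0.1
    done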
00:16:18.351 09:48:05 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.351 09:48:05 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.351 09:48:05 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:18.611 [2024-12-05 09:48:06.049011] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:18.611 [2024-12-05 09:48:06.049155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73465 ] 00:16:18.611 [2024-12-05 09:48:06.214587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.871 [2024-12-05 09:48:06.330651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.442 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.442 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:19.442 09:48:07 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:19.442 09:48:07 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:19.442 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.442 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:19.703 [2024-12-05 09:48:07.073541] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:19.703 [2024-12-05 09:48:07.074422] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:19.703 malloc0 00:16:19.703 [2024-12-05 09:48:07.145703] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:19.703 [2024-12-05 09:48:07.145800] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:19.703 [2024-12-05 09:48:07.145811] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:19.703 [2024-12-05 09:48:07.145819] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:19.703 [2024-12-05 09:48:07.153565] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:19.703 [2024-12-05 09:48:07.153595] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:19.703 [2024-12-05 09:48:07.161554] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:19.703 [2024-12-05 09:48:07.161698] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:19.703 [2024-12-05 09:48:07.178546] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:19.703 0 00:16:19.703 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.703 09:48:07 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:19.703 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.703 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:19.965 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.965 09:48:07 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:19.965 
"subsystems": [ 00:16:19.965 { 00:16:19.965 "subsystem": "fsdev", 00:16:19.965 "config": [ 00:16:19.965 { 00:16:19.965 "method": "fsdev_set_opts", 00:16:19.965 "params": { 00:16:19.965 "fsdev_io_pool_size": 65535, 00:16:19.965 "fsdev_io_cache_size": 256 00:16:19.965 } 00:16:19.965 } 00:16:19.965 ] 00:16:19.965 }, 00:16:19.965 { 00:16:19.965 "subsystem": "keyring", 00:16:19.965 "config": [] 00:16:19.965 }, 00:16:19.965 { 00:16:19.965 "subsystem": "iobuf", 00:16:19.965 "config": [ 00:16:19.965 { 00:16:19.965 "method": "iobuf_set_options", 00:16:19.965 "params": { 00:16:19.965 "small_pool_count": 8192, 00:16:19.965 "large_pool_count": 1024, 00:16:19.965 "small_bufsize": 8192, 00:16:19.965 "large_bufsize": 135168, 00:16:19.965 "enable_numa": false 00:16:19.965 } 00:16:19.965 } 00:16:19.965 ] 00:16:19.965 }, 00:16:19.965 { 00:16:19.965 "subsystem": "sock", 00:16:19.965 "config": [ 00:16:19.965 { 00:16:19.965 "method": "sock_set_default_impl", 00:16:19.965 "params": { 00:16:19.965 "impl_name": "posix" 00:16:19.965 } 00:16:19.965 }, 00:16:19.965 { 00:16:19.965 "method": "sock_impl_set_options", 00:16:19.965 "params": { 00:16:19.965 "impl_name": "ssl", 00:16:19.965 "recv_buf_size": 4096, 00:16:19.965 "send_buf_size": 4096, 00:16:19.965 "enable_recv_pipe": true, 00:16:19.965 "enable_quickack": false, 00:16:19.965 "enable_placement_id": 0, 00:16:19.965 "enable_zerocopy_send_server": true, 00:16:19.965 "enable_zerocopy_send_client": false, 00:16:19.965 "zerocopy_threshold": 0, 00:16:19.965 "tls_version": 0, 00:16:19.965 "enable_ktls": false 00:16:19.965 } 00:16:19.965 }, 00:16:19.965 { 00:16:19.965 "method": "sock_impl_set_options", 00:16:19.965 "params": { 00:16:19.965 "impl_name": "posix", 00:16:19.965 "recv_buf_size": 2097152, 00:16:19.965 "send_buf_size": 2097152, 00:16:19.965 "enable_recv_pipe": true, 00:16:19.965 "enable_quickack": false, 00:16:19.965 "enable_placement_id": 0, 00:16:19.965 "enable_zerocopy_send_server": true, 00:16:19.965 "enable_zerocopy_send_client": false, 00:16:19.965 "zerocopy_threshold": 0, 00:16:19.965 "tls_version": 0, 00:16:19.965 "enable_ktls": false 00:16:19.965 } 00:16:19.965 } 00:16:19.965 ] 00:16:19.965 }, 00:16:19.965 { 00:16:19.965 "subsystem": "vmd", 00:16:19.965 "config": [] 00:16:19.965 }, 00:16:19.965 { 00:16:19.965 "subsystem": "accel", 00:16:19.965 "config": [ 00:16:19.965 { 00:16:19.965 "method": "accel_set_options", 00:16:19.965 "params": { 00:16:19.965 "small_cache_size": 128, 00:16:19.965 "large_cache_size": 16, 00:16:19.965 "task_count": 2048, 00:16:19.965 "sequence_count": 2048, 00:16:19.965 "buf_count": 2048 00:16:19.965 } 00:16:19.965 } 00:16:19.965 ] 00:16:19.965 }, 00:16:19.965 { 00:16:19.965 "subsystem": "bdev", 00:16:19.965 "config": [ 00:16:19.965 { 00:16:19.965 "method": "bdev_set_options", 00:16:19.965 "params": { 00:16:19.965 "bdev_io_pool_size": 65535, 00:16:19.965 "bdev_io_cache_size": 256, 00:16:19.965 "bdev_auto_examine": true, 00:16:19.965 "iobuf_small_cache_size": 128, 00:16:19.965 "iobuf_large_cache_size": 16 00:16:19.965 } 00:16:19.965 }, 00:16:19.965 { 00:16:19.965 "method": "bdev_raid_set_options", 00:16:19.965 "params": { 00:16:19.965 "process_window_size_kb": 1024, 00:16:19.965 "process_max_bandwidth_mb_sec": 0 00:16:19.965 } 00:16:19.965 }, 00:16:19.965 { 00:16:19.965 "method": "bdev_iscsi_set_options", 00:16:19.965 "params": { 00:16:19.965 "timeout_sec": 30 00:16:19.965 } 00:16:19.965 }, 00:16:19.965 { 00:16:19.965 "method": "bdev_nvme_set_options", 00:16:19.965 "params": { 00:16:19.965 "action_on_timeout": "none", 
00:16:19.965 "timeout_us": 0, 00:16:19.965 "timeout_admin_us": 0, 00:16:19.965 "keep_alive_timeout_ms": 10000, 00:16:19.965 "arbitration_burst": 0, 00:16:19.965 "low_priority_weight": 0, 00:16:19.965 "medium_priority_weight": 0, 00:16:19.965 "high_priority_weight": 0, 00:16:19.965 "nvme_adminq_poll_period_us": 10000, 00:16:19.965 "nvme_ioq_poll_period_us": 0, 00:16:19.965 "io_queue_requests": 0, 00:16:19.965 "delay_cmd_submit": true, 00:16:19.965 "transport_retry_count": 4, 00:16:19.965 "bdev_retry_count": 3, 00:16:19.965 "transport_ack_timeout": 0, 00:16:19.965 "ctrlr_loss_timeout_sec": 0, 00:16:19.965 "reconnect_delay_sec": 0, 00:16:19.965 "fast_io_fail_timeout_sec": 0, 00:16:19.965 "disable_auto_failback": false, 00:16:19.965 "generate_uuids": false, 00:16:19.965 "transport_tos": 0, 00:16:19.965 "nvme_error_stat": false, 00:16:19.966 "rdma_srq_size": 0, 00:16:19.966 "io_path_stat": false, 00:16:19.966 "allow_accel_sequence": false, 00:16:19.966 "rdma_max_cq_size": 0, 00:16:19.966 "rdma_cm_event_timeout_ms": 0, 00:16:19.966 "dhchap_digests": [ 00:16:19.966 "sha256", 00:16:19.966 "sha384", 00:16:19.966 "sha512" 00:16:19.966 ], 00:16:19.966 "dhchap_dhgroups": [ 00:16:19.966 "null", 00:16:19.966 "ffdhe2048", 00:16:19.966 "ffdhe3072", 00:16:19.966 "ffdhe4096", 00:16:19.966 "ffdhe6144", 00:16:19.966 "ffdhe8192" 00:16:19.966 ] 00:16:19.966 } 00:16:19.966 }, 00:16:19.966 { 00:16:19.966 "method": "bdev_nvme_set_hotplug", 00:16:19.966 "params": { 00:16:19.966 "period_us": 100000, 00:16:19.966 "enable": false 00:16:19.966 } 00:16:19.966 }, 00:16:19.966 { 00:16:19.966 "method": "bdev_malloc_create", 00:16:19.966 "params": { 00:16:19.966 "name": "malloc0", 00:16:19.966 "num_blocks": 8192, 00:16:19.966 "block_size": 4096, 00:16:19.966 "physical_block_size": 4096, 00:16:19.966 "uuid": "6a371f7f-bc3b-4f6b-8ffb-cd44cc206620", 00:16:19.966 "optimal_io_boundary": 0, 00:16:19.966 "md_size": 0, 00:16:19.966 "dif_type": 0, 00:16:19.966 "dif_is_head_of_md": false, 00:16:19.966 "dif_pi_format": 0 00:16:19.966 } 00:16:19.966 }, 00:16:19.966 { 00:16:19.966 "method": "bdev_wait_for_examine" 00:16:19.966 } 00:16:19.966 ] 00:16:19.966 }, 00:16:19.966 { 00:16:19.966 "subsystem": "scsi", 00:16:19.966 "config": null 00:16:19.966 }, 00:16:19.966 { 00:16:19.966 "subsystem": "scheduler", 00:16:19.966 "config": [ 00:16:19.966 { 00:16:19.966 "method": "framework_set_scheduler", 00:16:19.966 "params": { 00:16:19.966 "name": "static" 00:16:19.966 } 00:16:19.966 } 00:16:19.966 ] 00:16:19.966 }, 00:16:19.966 { 00:16:19.966 "subsystem": "vhost_scsi", 00:16:19.966 "config": [] 00:16:19.966 }, 00:16:19.966 { 00:16:19.966 "subsystem": "vhost_blk", 00:16:19.966 "config": [] 00:16:19.966 }, 00:16:19.966 { 00:16:19.966 "subsystem": "ublk", 00:16:19.966 "config": [ 00:16:19.966 { 00:16:19.966 "method": "ublk_create_target", 00:16:19.966 "params": { 00:16:19.966 "cpumask": "1" 00:16:19.966 } 00:16:19.966 }, 00:16:19.966 { 00:16:19.966 "method": "ublk_start_disk", 00:16:19.966 "params": { 00:16:19.966 "bdev_name": "malloc0", 00:16:19.966 "ublk_id": 0, 00:16:19.966 "num_queues": 1, 00:16:19.966 "queue_depth": 128 00:16:19.966 } 00:16:19.966 } 00:16:19.966 ] 00:16:19.966 }, 00:16:19.966 { 00:16:19.966 "subsystem": "nbd", 00:16:19.966 "config": [] 00:16:19.966 }, 00:16:19.966 { 00:16:19.966 "subsystem": "nvmf", 00:16:19.966 "config": [ 00:16:19.966 { 00:16:19.966 "method": "nvmf_set_config", 00:16:19.966 "params": { 00:16:19.966 "discovery_filter": "match_any", 00:16:19.966 "admin_cmd_passthru": { 00:16:19.966 "identify_ctrlr": false 
00:16:19.966 }, 00:16:19.966 "dhchap_digests": [ 00:16:19.966 "sha256", 00:16:19.966 "sha384", 00:16:19.966 "sha512" 00:16:19.966 ], 00:16:19.966 "dhchap_dhgroups": [ 00:16:19.966 "null", 00:16:19.966 "ffdhe2048", 00:16:19.966 "ffdhe3072", 00:16:19.966 "ffdhe4096", 00:16:19.966 "ffdhe6144", 00:16:19.966 "ffdhe8192" 00:16:19.966 ] 00:16:19.966 } 00:16:19.966 }, 00:16:19.966 { 00:16:19.966 "method": "nvmf_set_max_subsystems", 00:16:19.966 "params": { 00:16:19.966 "max_subsystems": 1024 00:16:19.966 } 00:16:19.966 }, 00:16:19.966 { 00:16:19.966 "method": "nvmf_set_crdt", 00:16:19.966 "params": { 00:16:19.966 "crdt1": 0, 00:16:19.966 "crdt2": 0, 00:16:19.966 "crdt3": 0 00:16:19.966 } 00:16:19.966 } 00:16:19.966 ] 00:16:19.966 }, 00:16:19.966 { 00:16:19.966 "subsystem": "iscsi", 00:16:19.966 "config": [ 00:16:19.966 { 00:16:19.966 "method": "iscsi_set_options", 00:16:19.966 "params": { 00:16:19.966 "node_base": "iqn.2016-06.io.spdk", 00:16:19.966 "max_sessions": 128, 00:16:19.966 "max_connections_per_session": 2, 00:16:19.966 "max_queue_depth": 64, 00:16:19.966 "default_time2wait": 2, 00:16:19.966 "default_time2retain": 20, 00:16:19.966 "first_burst_length": 8192, 00:16:19.966 "immediate_data": true, 00:16:19.966 "allow_duplicated_isid": false, 00:16:19.966 "error_recovery_level": 0, 00:16:19.966 "nop_timeout": 60, 00:16:19.966 "nop_in_interval": 30, 00:16:19.966 "disable_chap": false, 00:16:19.966 "require_chap": false, 00:16:19.966 "mutual_chap": false, 00:16:19.966 "chap_group": 0, 00:16:19.966 "max_large_datain_per_connection": 64, 00:16:19.966 "max_r2t_per_connection": 4, 00:16:19.966 "pdu_pool_size": 36864, 00:16:19.966 "immediate_data_pool_size": 16384, 00:16:19.966 "data_out_pool_size": 2048 00:16:19.966 } 00:16:19.966 } 00:16:19.966 ] 00:16:19.966 } 00:16:19.966 ] 00:16:19.966 }' 00:16:19.966 09:48:07 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73465 00:16:19.966 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73465 ']' 00:16:19.966 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73465 00:16:19.966 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:19.966 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.966 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73465 00:16:19.966 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.966 killing process with pid 73465 00:16:19.966 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.966 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73465' 00:16:19.966 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73465 00:16:19.966 09:48:07 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73465 00:16:21.348 [2024-12-05 09:48:08.598834] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:21.348 [2024-12-05 09:48:08.636556] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:21.348 [2024-12-05 09:48:08.636712] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:21.348 [2024-12-05 09:48:08.645536] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:21.348 [2024-12-05 
09:48:08.645604] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:21.348 [2024-12-05 09:48:08.645619] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:21.348 [2024-12-05 09:48:08.645649] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:21.348 [2024-12-05 09:48:08.645802] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:22.733 09:48:09 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73520 00:16:22.733 09:48:09 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73520 00:16:22.733 09:48:09 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73520 ']' 00:16:22.733 09:48:09 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.733 09:48:09 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.733 09:48:09 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.733 09:48:09 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:22.733 "subsystems": [ 00:16:22.733 { 00:16:22.733 "subsystem": "fsdev", 00:16:22.733 "config": [ 00:16:22.733 { 00:16:22.733 "method": "fsdev_set_opts", 00:16:22.733 "params": { 00:16:22.733 "fsdev_io_pool_size": 65535, 00:16:22.733 "fsdev_io_cache_size": 256 00:16:22.733 } 00:16:22.733 } 00:16:22.733 ] 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "subsystem": "keyring", 00:16:22.733 "config": [] 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "subsystem": "iobuf", 00:16:22.733 "config": [ 00:16:22.733 { 00:16:22.733 "method": "iobuf_set_options", 00:16:22.733 "params": { 00:16:22.733 "small_pool_count": 8192, 00:16:22.733 "large_pool_count": 1024, 00:16:22.733 "small_bufsize": 8192, 00:16:22.733 "large_bufsize": 135168, 00:16:22.733 "enable_numa": false 00:16:22.733 } 00:16:22.733 } 00:16:22.733 ] 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "subsystem": "sock", 00:16:22.733 "config": [ 00:16:22.733 { 00:16:22.733 "method": "sock_set_default_impl", 00:16:22.733 "params": { 00:16:22.733 "impl_name": "posix" 00:16:22.733 } 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "method": "sock_impl_set_options", 00:16:22.733 "params": { 00:16:22.733 "impl_name": "ssl", 00:16:22.733 "recv_buf_size": 4096, 00:16:22.733 "send_buf_size": 4096, 00:16:22.733 "enable_recv_pipe": true, 00:16:22.733 "enable_quickack": false, 00:16:22.733 "enable_placement_id": 0, 00:16:22.733 "enable_zerocopy_send_server": true, 00:16:22.733 "enable_zerocopy_send_client": false, 00:16:22.733 "zerocopy_threshold": 0, 00:16:22.733 "tls_version": 0, 00:16:22.733 "enable_ktls": false 00:16:22.733 } 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "method": "sock_impl_set_options", 00:16:22.733 "params": { 00:16:22.733 "impl_name": "posix", 00:16:22.733 "recv_buf_size": 2097152, 00:16:22.733 "send_buf_size": 2097152, 00:16:22.733 "enable_recv_pipe": true, 00:16:22.733 "enable_quickack": false, 00:16:22.733 "enable_placement_id": 0, 00:16:22.733 "enable_zerocopy_send_server": true, 00:16:22.733 "enable_zerocopy_send_client": false, 00:16:22.733 "zerocopy_threshold": 0, 00:16:22.733 "tls_version": 0, 00:16:22.733 "enable_ktls": false 00:16:22.733 } 00:16:22.733 } 00:16:22.733 ] 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "subsystem": "vmd", 00:16:22.733 "config": [] 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "subsystem": 
"accel", 00:16:22.733 "config": [ 00:16:22.733 { 00:16:22.733 "method": "accel_set_options", 00:16:22.733 "params": { 00:16:22.733 "small_cache_size": 128, 00:16:22.733 "large_cache_size": 16, 00:16:22.733 "task_count": 2048, 00:16:22.733 "sequence_count": 2048, 00:16:22.733 "buf_count": 2048 00:16:22.733 } 00:16:22.733 } 00:16:22.733 ] 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "subsystem": "bdev", 00:16:22.733 "config": [ 00:16:22.733 { 00:16:22.733 "method": "bdev_set_options", 00:16:22.733 "params": { 00:16:22.733 "bdev_io_pool_size": 65535, 00:16:22.733 "bdev_io_cache_size": 256, 00:16:22.733 "bdev_auto_examine": true, 00:16:22.733 "iobuf_small_cache_size": 128, 00:16:22.733 "iobuf_large_cache_size": 16 00:16:22.733 } 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "method": "bdev_raid_set_options", 00:16:22.733 "params": { 00:16:22.733 "process_window_size_kb": 1024, 00:16:22.733 "process_max_bandwidth_mb_sec": 0 00:16:22.733 } 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "method": "bdev_iscsi_set_options", 00:16:22.733 "params": { 00:16:22.733 "timeout_sec": 30 00:16:22.733 } 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "method": "bdev_nvme_set_options", 00:16:22.733 "params": { 00:16:22.733 "action_on_timeout": "none", 00:16:22.733 "timeout_us": 0, 00:16:22.733 "timeout_admin_us": 0, 00:16:22.733 "keep_alive_timeout_ms": 10000, 00:16:22.733 "arbitration_burst": 0, 00:16:22.733 "low_priority_weight": 0, 00:16:22.733 "medium_priority_weight": 0, 00:16:22.733 "high_priority_weight": 0, 00:16:22.733 "nvme_adminq_poll_period_us": 10000, 00:16:22.733 "nvme_ioq_poll_period_us": 0, 00:16:22.733 "io_queue_requests": 0, 00:16:22.733 "delay_cmd_submit": true, 00:16:22.733 "transport_retry_count": 4, 00:16:22.733 "bdev_retry_count": 3, 00:16:22.733 "transport_ack_timeout": 0, 00:16:22.733 "ctrlr_loss_timeout_sec": 0, 00:16:22.733 "reconnect_delay_sec": 0, 00:16:22.733 "fast_io_fail_timeout_sec": 0, 00:16:22.733 "disable_auto_failback": false, 00:16:22.733 "generate_uuids": false, 00:16:22.733 "transport_tos": 0, 00:16:22.733 "nvme_error_stat": false, 00:16:22.733 "rdma_srq_size": 0, 00:16:22.733 "io_path_stat": false, 00:16:22.733 "allow_accel_sequence": false, 00:16:22.733 "rdma_max_cq_size": 0, 00:16:22.733 "rdma_cm_event_timeout_ms": 0, 00:16:22.733 "dhchap_digests": [ 00:16:22.733 "sha256", 00:16:22.733 "sha384", 00:16:22.733 "sha512" 00:16:22.733 ], 00:16:22.733 "dhchap_dhgroups": [ 00:16:22.733 "null", 00:16:22.733 "ffdhe2048", 00:16:22.733 "ffdhe3072", 00:16:22.733 "ffdhe4096", 00:16:22.733 "ffdhe6144", 00:16:22.733 "ffdhe8192" 00:16:22.733 ] 00:16:22.733 } 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "method": "bdev_nvme_set_hotplug", 00:16:22.733 "params": { 00:16:22.733 "period_us": 100000, 00:16:22.733 "enable": false 00:16:22.733 } 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "method": "bdev_malloc_create", 00:16:22.733 "params": { 00:16:22.733 "name": "malloc0", 00:16:22.733 "num_blocks": 8192, 00:16:22.733 "block_size": 4096, 00:16:22.733 "physical_block_size": 4096, 00:16:22.733 "uuid": "6a371f7f-bc3b-4f6b-8ffb-cd44cc206620", 00:16:22.733 "optimal_io_boundary": 0, 00:16:22.733 "md_size": 0, 00:16:22.733 "dif_type": 0, 00:16:22.733 "dif_is_head_of_md": false, 00:16:22.733 "dif_pi_format": 0 00:16:22.733 } 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "method": "bdev_wait_for_examine" 00:16:22.733 } 00:16:22.733 ] 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "subsystem": "scsi", 00:16:22.733 "config": null 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "subsystem": "scheduler", 00:16:22.733 
"config": [ 00:16:22.733 { 00:16:22.733 "method": "framework_set_scheduler", 00:16:22.733 "params": { 00:16:22.733 "name": "static" 00:16:22.733 } 00:16:22.733 } 00:16:22.733 ] 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "subsystem": "vhost_scsi", 00:16:22.733 "config": [] 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "subsystem": "vhost_blk", 00:16:22.733 "config": [] 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "subsystem": "ublk", 00:16:22.733 "config": [ 00:16:22.733 { 00:16:22.733 "method": "ublk_create_target", 00:16:22.733 "params": { 00:16:22.733 "cpumask": "1" 00:16:22.733 } 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "method": "ublk_start_disk", 00:16:22.733 "params": { 00:16:22.733 "bdev_name": "malloc0", 00:16:22.733 "ublk_id": 0, 00:16:22.733 "num_queues": 1, 00:16:22.733 "queue_depth": 128 00:16:22.733 } 00:16:22.733 } 00:16:22.733 ] 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "subsystem": "nbd", 00:16:22.733 "config": [] 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "subsystem": "nvmf", 00:16:22.733 "config": [ 00:16:22.733 { 00:16:22.733 "method": "nvmf_set_config", 00:16:22.733 "params": { 00:16:22.734 "discovery_filter": "match_any", 00:16:22.734 "admin_cmd_passthru": { 00:16:22.734 "identify_ctrlr": false 00:16:22.734 }, 00:16:22.734 "dhchap_digests": [ 00:16:22.734 "sha256", 00:16:22.734 "sha384", 00:16:22.734 "sha512" 00:16:22.734 ], 00:16:22.734 "dhchap_dhgroups": [ 00:16:22.734 "null", 00:16:22.734 "ffdhe2048", 00:16:22.734 "ffdhe3072", 00:16:22.734 "ffdhe4096", 00:16:22.734 "ffdhe6144", 00:16:22.734 "ffdhe8192" 00:16:22.734 ] 00:16:22.734 } 00:16:22.734 }, 00:16:22.734 { 00:16:22.734 "method": "nvmf_set_max_subsystems", 00:16:22.734 "params": { 00:16:22.734 "max_subsystems": 1024 00:16:22.734 } 00:16:22.734 }, 00:16:22.734 { 00:16:22.734 "method": "nvmf_set_crdt", 00:16:22.734 "params": { 00:16:22.734 "crdt1": 0, 00:16:22.734 "crdt2": 0, 00:16:22.734 "crdt3": 0 00:16:22.734 } 00:16:22.734 } 00:16:22.734 ] 00:16:22.734 }, 00:16:22.734 { 00:16:22.734 "subsystem": "iscsi", 00:16:22.734 "config": [ 00:16:22.734 { 00:16:22.734 "method": "iscsi_set_options", 00:16:22.734 "params": { 00:16:22.734 "node_base": "iqn.2016-06.io.spdk", 00:16:22.734 "max_sessions": 128, 00:16:22.734 "max_connections_per_session": 2, 00:16:22.734 "max_queue_depth": 64, 00:16:22.734 "default_time2wait": 2, 00:16:22.734 "default_time2retain": 20, 00:16:22.734 "first_burst_length": 8192, 00:16:22.734 "immediate_data": true, 00:16:22.734 "allow_duplicated_isid": false, 00:16:22.734 "error_recovery_level": 0, 00:16:22.734 "nop_timeout": 60, 00:16:22.734 "nop_in_interval": 30, 00:16:22.734 "disable_chap": false, 00:16:22.734 "require_chap": false, 00:16:22.734 "mutual_chap": false, 00:16:22.734 "chap_group": 0, 00:16:22.734 "max_large_datain_per_connection": 64, 00:16:22.734 "max_r2t_per_connection": 4, 00:16:22.734 "pdu_pool_size": 36864, 00:16:22.734 "immediate_data_pool_size": 16384, 00:16:22.734 "data_out_pool_size": 2048 00:16:22.734 } 00:16:22.734 } 00:16:22.734 ] 00:16:22.734 } 00:16:22.734 ] 00:16:22.734 }' 00:16:22.734 09:48:09 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.734 09:48:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:22.734 09:48:09 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:22.734 [2024-12-05 09:48:10.020894] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:22.734 [2024-12-05 09:48:10.021032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73520 ] 00:16:22.734 [2024-12-05 09:48:10.170314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.734 [2024-12-05 09:48:10.245629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.307 [2024-12-05 09:48:10.877531] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:23.307 [2024-12-05 09:48:10.878159] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:23.307 [2024-12-05 09:48:10.885611] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:23.307 [2024-12-05 09:48:10.885666] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:23.307 [2024-12-05 09:48:10.885674] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:23.307 [2024-12-05 09:48:10.885679] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:23.307 [2024-12-05 09:48:10.894576] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:23.307 [2024-12-05 09:48:10.894593] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:23.307 [2024-12-05 09:48:10.901529] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:23.307 [2024-12-05 09:48:10.901598] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:23.307 [2024-12-05 09:48:10.918533] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:23.568 09:48:10 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.568 09:48:10 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:23.568 09:48:10 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:23.568 09:48:10 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:23.568 09:48:10 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.568 09:48:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:23.569 09:48:10 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.569 09:48:10 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:23.569 09:48:10 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:23.569 09:48:10 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73520 00:16:23.569 09:48:10 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73520 ']' 00:16:23.569 09:48:10 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73520 00:16:23.569 09:48:10 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:23.569 09:48:10 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:23.569 09:48:10 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73520 00:16:23.569 killing process with pid 73520 00:16:23.569 09:48:11 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:23.569 
09:48:11 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:23.569 09:48:11 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73520' 00:16:23.569 09:48:11 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73520 00:16:23.569 09:48:11 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73520 00:16:24.512 [2024-12-05 09:48:12.022388] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:24.512 [2024-12-05 09:48:12.055550] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:24.512 [2024-12-05 09:48:12.055718] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:24.512 [2024-12-05 09:48:12.063533] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:24.512 [2024-12-05 09:48:12.063636] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:24.512 [2024-12-05 09:48:12.063657] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:24.512 [2024-12-05 09:48:12.063717] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:24.512 [2024-12-05 09:48:12.063868] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:25.899 09:48:13 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:25.899 ************************************ 00:16:25.899 END TEST test_save_ublk_config 00:16:25.899 ************************************ 00:16:25.899 00:16:25.899 real 0m7.272s 00:16:25.899 user 0m5.015s 00:16:25.899 sys 0m2.899s 00:16:25.899 09:48:13 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.899 09:48:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:25.899 09:48:13 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73587 00:16:25.899 09:48:13 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:25.899 09:48:13 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:25.899 09:48:13 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73587 00:16:25.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.899 09:48:13 ublk -- common/autotest_common.sh@835 -- # '[' -z 73587 ']' 00:16:25.899 09:48:13 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.899 09:48:13 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.899 09:48:13 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.899 09:48:13 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.899 09:48:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.899 [2024-12-05 09:48:13.347778] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
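[Editor's note] The main suite now launches a two-core target (-m 0x3) and blocks in waitforlisten until the RPC socket answers. Roughly what that helper does — a sketch, not the harness's exact code; socket path and retry cadence are assumptions:

    ./build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    # poll the default RPC socket until the new target responds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done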
00:16:25.899 [2024-12-05 09:48:13.347900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73587 ] 00:16:25.899 [2024-12-05 09:48:13.507319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:26.160 [2024-12-05 09:48:13.608955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.160 [2024-12-05 09:48:13.609064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.733 09:48:14 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.733 09:48:14 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:26.733 09:48:14 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:26.733 09:48:14 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:26.733 09:48:14 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.733 09:48:14 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.733 ************************************ 00:16:26.733 START TEST test_create_ublk 00:16:26.733 ************************************ 00:16:26.733 09:48:14 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:26.733 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:26.733 09:48:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.733 09:48:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.733 [2024-12-05 09:48:14.276537] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:26.733 [2024-12-05 09:48:14.278793] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:26.733 09:48:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.733 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:26.733 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:26.733 09:48:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.733 09:48:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.995 09:48:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.995 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:26.995 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:26.995 09:48:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.995 09:48:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.995 [2024-12-05 09:48:14.528702] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:26.995 [2024-12-05 09:48:14.529141] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:26.995 [2024-12-05 09:48:14.529158] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:26.995 [2024-12-05 09:48:14.529166] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:26.995 [2024-12-05 09:48:14.536573] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:26.995 [2024-12-05 09:48:14.536600] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:26.995 
[2024-12-05 09:48:14.544564] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:26.995 [2024-12-05 09:48:14.545274] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:26.995 [2024-12-05 09:48:14.559655] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:26.995 09:48:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.995 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:26.995 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:26.995 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:26.995 09:48:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.995 09:48:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.995 09:48:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.995 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:26.995 { 00:16:26.995 "ublk_device": "/dev/ublkb0", 00:16:26.995 "id": 0, 00:16:26.995 "queue_depth": 512, 00:16:26.995 "num_queues": 4, 00:16:26.995 "bdev_name": "Malloc0" 00:16:26.995 } 00:16:26.995 ]' 00:16:26.995 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:26.995 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:26.995 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:27.256 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:27.256 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:27.256 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:27.256 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:27.256 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:27.256 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:27.256 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:27.256 09:48:14 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:27.256 09:48:14 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:27.256 09:48:14 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:27.256 09:48:14 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:27.256 09:48:14 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:27.256 09:48:14 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:27.256 09:48:14 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:27.256 09:48:14 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:27.256 09:48:14 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:27.256 09:48:14 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:27.256 09:48:14 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
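[Editor's note] Distilled from the xtrace above, test_create_ublk boils down to three RPCs plus a jq check against the ublk_get_disks output (these are the same rpc.py subcommands and parameters the log shows; default socket assumed):

    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create 128 4096             # 128 MiB bdev, 4 KiB blocks -> Malloc0
    ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # exposes /dev/ublkb0
    ./scripts/rpc.py ublk_get_disks -n 0 | jq -r '.[0].ublk_device'   # expect /dev/ublkb0

The fio run that follows then writes a 0xcc pattern across the full 128 MiB of /dev/ublkb0 for 10 seconds with verification enabled.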
00:16:27.256 09:48:14 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:27.256 fio: verification read phase will never start because write phase uses all of runtime 00:16:27.257 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:27.257 fio-3.35 00:16:27.257 Starting 1 process 00:16:39.466 00:16:39.466 fio_test: (groupid=0, jobs=1): err= 0: pid=73639: Thu Dec 5 09:48:24 2024 00:16:39.466 write: IOPS=15.6k, BW=61.0MiB/s (64.0MB/s)(610MiB/10001msec); 0 zone resets 00:16:39.466 clat (usec): min=30, max=4208, avg=63.26, stdev=85.88 00:16:39.466 lat (usec): min=31, max=4226, avg=63.69, stdev=85.89 00:16:39.466 clat percentiles (usec): 00:16:39.466 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 56], 00:16:39.466 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 62], 00:16:39.466 | 70.00th=[ 64], 80.00th=[ 66], 90.00th=[ 69], 95.00th=[ 73], 00:16:39.466 | 99.00th=[ 82], 99.50th=[ 90], 99.90th=[ 1352], 99.95th=[ 2540], 00:16:39.466 | 99.99th=[ 3589] 00:16:39.466 bw ( KiB/s): min=53432, max=79128, per=100.00%, avg=62585.68, stdev=4988.53, samples=19 00:16:39.466 iops : min=13358, max=19782, avg=15646.42, stdev=1247.13, samples=19 00:16:39.467 lat (usec) : 50=5.94%, 100=93.68%, 250=0.22%, 500=0.02%, 750=0.01% 00:16:39.467 lat (usec) : 1000=0.01% 00:16:39.467 lat (msec) : 2=0.05%, 4=0.07%, 10=0.01% 00:16:39.467 cpu : usr=1.81%, sys=9.32%, ctx=156254, majf=0, minf=797 00:16:39.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:39.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.467 issued rwts: total=0,156254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:39.467 00:16:39.467 Run status group 0 (all jobs): 00:16:39.467 WRITE: bw=61.0MiB/s (64.0MB/s), 61.0MiB/s-61.0MiB/s (64.0MB/s-64.0MB/s), io=610MiB (640MB), run=10001-10001msec 00:16:39.467 00:16:39.467 Disk stats (read/write): 00:16:39.467 ublkb0: ios=0/154665, merge=0/0, ticks=0/8811, in_queue=8812, util=98.96% 00:16:39.467 09:48:24 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:39.467 09:48:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.467 09:48:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.467 [2024-12-05 09:48:24.963733] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.467 [2024-12-05 09:48:24.999561] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.467 [2024-12-05 09:48:25.000258] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.467 [2024-12-05 09:48:25.009584] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.467 [2024-12-05 09:48:25.009830] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:39.467 [2024-12-05 09:48:25.009838] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.467 09:48:25 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.467 [2024-12-05 09:48:25.024590] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:39.467 request: 00:16:39.467 { 00:16:39.467 "ublk_id": 0, 00:16:39.467 "method": "ublk_stop_disk", 00:16:39.467 "req_id": 1 00:16:39.467 } 00:16:39.467 Got JSON-RPC error response 00:16:39.467 response: 00:16:39.467 { 00:16:39.467 "code": -19, 00:16:39.467 "message": "No such device" 00:16:39.467 } 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:39.467 09:48:25 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.467 [2024-12-05 09:48:25.040585] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:39.467 [2024-12-05 09:48:25.048524] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:39.467 [2024-12-05 09:48:25.048555] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.467 09:48:25 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.467 09:48:25 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:39.467 09:48:25 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.467 09:48:25 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:39.467 09:48:25 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:39.467 09:48:25 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:39.467 09:48:25 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.467 09:48:25 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:39.467 09:48:25 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:39.467 09:48:25 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:39.467 00:16:39.467 real 0m11.225s 00:16:39.467 user 0m0.475s 00:16:39.467 sys 0m0.998s 00:16:39.467 ************************************ 00:16:39.467 END TEST test_create_ublk 00:16:39.467 ************************************ 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.467 09:48:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.467 09:48:25 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:39.467 09:48:25 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:39.467 09:48:25 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.467 09:48:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.467 ************************************ 00:16:39.467 START TEST test_create_multi_ublk 00:16:39.467 ************************************ 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.467 [2024-12-05 09:48:25.539525] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:39.467 [2024-12-05 09:48:25.541071] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.467 [2024-12-05 09:48:25.767640] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
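[Editor's note] test_create_multi_ublk iterates the seq 0 3 loop visible above, repeating the same malloc-plus-ublk pair for IDs 0 through 3. Condensed into a sketch (same subcommands and sizes as the log):

    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        ./scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # -> /dev/ublkb$i
    done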
00:16:39.467 [2024-12-05 09:48:25.767963] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:39.467 [2024-12-05 09:48:25.767976] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:39.467 [2024-12-05 09:48:25.767985] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.467 [2024-12-05 09:48:25.791539] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.467 [2024-12-05 09:48:25.791560] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.467 [2024-12-05 09:48:25.803531] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.467 [2024-12-05 09:48:25.804037] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:39.467 [2024-12-05 09:48:25.843543] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.467 09:48:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.467 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.467 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:39.467 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:39.467 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.467 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.467 [2024-12-05 09:48:26.067642] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:39.467 [2024-12-05 09:48:26.067951] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:39.467 [2024-12-05 09:48:26.067967] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:39.467 [2024-12-05 09:48:26.067972] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.467 [2024-12-05 09:48:26.075545] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.467 [2024-12-05 09:48:26.075561] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.468 [2024-12-05 09:48:26.083540] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.468 [2024-12-05 09:48:26.084039] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:39.468 [2024-12-05 09:48:26.100542] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.468 09:48:26 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.468 [2024-12-05 09:48:26.266617] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:39.468 [2024-12-05 09:48:26.266914] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:39.468 [2024-12-05 09:48:26.266926] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:39.468 [2024-12-05 09:48:26.266932] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.468 [2024-12-05 09:48:26.274560] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.468 [2024-12-05 09:48:26.274580] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.468 [2024-12-05 09:48:26.282541] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.468 [2024-12-05 09:48:26.283034] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:39.468 [2024-12-05 09:48:26.306535] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.468 [2024-12-05 09:48:26.466636] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:39.468 [2024-12-05 09:48:26.466933] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:39.468 [2024-12-05 09:48:26.466946] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:39.468 [2024-12-05 09:48:26.466951] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.468 [2024-12-05 
09:48:26.474541] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.468 [2024-12-05 09:48:26.474558] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.468 [2024-12-05 09:48:26.482534] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.468 [2024-12-05 09:48:26.483030] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:39.468 [2024-12-05 09:48:26.487208] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:39.468 { 00:16:39.468 "ublk_device": "/dev/ublkb0", 00:16:39.468 "id": 0, 00:16:39.468 "queue_depth": 512, 00:16:39.468 "num_queues": 4, 00:16:39.468 "bdev_name": "Malloc0" 00:16:39.468 }, 00:16:39.468 { 00:16:39.468 "ublk_device": "/dev/ublkb1", 00:16:39.468 "id": 1, 00:16:39.468 "queue_depth": 512, 00:16:39.468 "num_queues": 4, 00:16:39.468 "bdev_name": "Malloc1" 00:16:39.468 }, 00:16:39.468 { 00:16:39.468 "ublk_device": "/dev/ublkb2", 00:16:39.468 "id": 2, 00:16:39.468 "queue_depth": 512, 00:16:39.468 "num_queues": 4, 00:16:39.468 "bdev_name": "Malloc2" 00:16:39.468 }, 00:16:39.468 { 00:16:39.468 "ublk_device": "/dev/ublkb3", 00:16:39.468 "id": 3, 00:16:39.468 "queue_depth": 512, 00:16:39.468 "num_queues": 4, 00:16:39.468 "bdev_name": "Malloc3" 00:16:39.468 } 00:16:39.468 ]' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
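[Editor's note] With all four devices up, the test dumps ublk_get_disks once and then verifies each entry field by field with jq, exactly as the checks above do for device 0. A condensed sketch of that verification loop (variable names are illustrative):

    disks=$(./scripts/rpc.py ublk_get_disks)
    for i in 0 1 2 3; do
        [[ $(jq -r ".[$i].ublk_device" <<<"$disks") == "/dev/ublkb$i" ]]
        [[ $(jq -r ".[$i].queue_depth" <<<"$disks") == 512 ]]
    done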
00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.468 09:48:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:39.468 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:39.468 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:39.468 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:39.468 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:39.468 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:39.468 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.727 [2024-12-05 09:48:27.150617] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.727 [2024-12-05 09:48:27.192086] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.727 [2024-12-05 09:48:27.193271] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.727 [2024-12-05 09:48:27.198539] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.727 [2024-12-05 09:48:27.198790] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:39.727 [2024-12-05 09:48:27.198804] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.727 [2024-12-05 09:48:27.213629] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.727 [2024-12-05 09:48:27.252575] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.727 [2024-12-05 09:48:27.253418] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.727 [2024-12-05 09:48:27.256772] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.727 [2024-12-05 09:48:27.257019] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:39.727 [2024-12-05 09:48:27.257032] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.727 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.728 [2024-12-05 09:48:27.275599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.728 [2024-12-05 09:48:27.309560] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.728 [2024-12-05 09:48:27.310331] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.728 [2024-12-05 09:48:27.322538] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.728 [2024-12-05 09:48:27.322767] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:39.728 [2024-12-05 09:48:27.322780] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:39.728 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.728 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.728 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:39.728 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.728 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
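[Editor's note] Teardown mirrors creation: each disk is stopped in ID order (STOP_DEV then DEL_DEV per device, as the debug lines show), the target is destroyed with an extended RPC timeout, and the backing malloc bdevs are deleted. Condensed (the -t 120 timeout matches the ublk_destroy_target call that follows below):

    for i in 0 1 2 3; do
        ./scripts/rpc.py ublk_stop_disk "$i"
    done
    ./scripts/rpc.py -t 120 ublk_destroy_target
    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_delete "Malloc$i"
    done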
00:16:39.728 [2024-12-05 09:48:27.326685] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.986 [2024-12-05 09:48:27.365561] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.986 [2024-12-05 09:48:27.366229] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.986 [2024-12-05 09:48:27.377531] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.986 [2024-12-05 09:48:27.377770] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:39.986 [2024-12-05 09:48:27.377784] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:39.986 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.986 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:39.986 [2024-12-05 09:48:27.564578] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:39.986 [2024-12-05 09:48:27.572882] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:39.986 [2024-12-05 09:48:27.572909] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:39.986 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:39.986 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.986 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:39.986 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.986 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.551 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.551 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:40.551 09:48:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:40.551 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.551 09:48:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.809 09:48:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.809 09:48:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:40.809 09:48:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:40.809 09:48:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.809 09:48:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.068 09:48:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.068 09:48:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:41.068 09:48:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:41.068 09:48:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.068 09:48:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.068 09:48:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.068 09:48:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:41.068 09:48:28 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:16:41.068 09:48:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.068 09:48:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.326 09:48:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.326 09:48:28 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:41.326 09:48:28 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:41.326 09:48:28 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:41.326 09:48:28 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:41.326 09:48:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.326 09:48:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.326 09:48:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.326 09:48:28 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:41.326 09:48:28 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:41.326 ************************************ 00:16:41.326 END TEST test_create_multi_ublk 00:16:41.326 ************************************ 00:16:41.326 09:48:28 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:41.326 00:16:41.326 real 0m3.254s 00:16:41.326 user 0m0.809s 00:16:41.326 sys 0m0.147s 00:16:41.326 09:48:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.326 09:48:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.326 09:48:28 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:41.326 09:48:28 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:41.326 09:48:28 ublk -- ublk/ublk.sh@130 -- # killprocess 73587 00:16:41.326 09:48:28 ublk -- common/autotest_common.sh@954 -- # '[' -z 73587 ']' 00:16:41.326 09:48:28 ublk -- common/autotest_common.sh@958 -- # kill -0 73587 00:16:41.326 09:48:28 ublk -- common/autotest_common.sh@959 -- # uname 00:16:41.326 09:48:28 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.326 09:48:28 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73587 00:16:41.326 killing process with pid 73587 00:16:41.326 09:48:28 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:41.326 09:48:28 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:41.326 09:48:28 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73587' 00:16:41.326 09:48:28 ublk -- common/autotest_common.sh@973 -- # kill 73587 00:16:41.326 09:48:28 ublk -- common/autotest_common.sh@978 -- # wait 73587 00:16:41.892 [2024-12-05 09:48:29.363138] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:41.892 [2024-12-05 09:48:29.363192] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:42.461 00:16:42.461 real 0m24.244s 00:16:42.461 user 0m35.017s 00:16:42.461 sys 0m8.677s 00:16:42.461 09:48:30 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:42.461 ************************************ 00:16:42.461 END TEST ublk 00:16:42.461 ************************************ 00:16:42.461 09:48:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:42.461 09:48:30 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:42.461 09:48:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:16:42.461 09:48:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:42.461 09:48:30 -- common/autotest_common.sh@10 -- # set +x 00:16:42.461 ************************************ 00:16:42.461 START TEST ublk_recovery 00:16:42.461 ************************************ 00:16:42.461 09:48:30 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:42.722 * Looking for test storage... 00:16:42.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:42.722 09:48:30 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:42.722 09:48:30 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:42.722 09:48:30 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:42.722 09:48:30 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:42.722 09:48:30 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:42.723 09:48:30 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:42.723 09:48:30 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:42.723 09:48:30 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:42.723 09:48:30 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:42.723 09:48:30 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:42.723 09:48:30 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:42.723 09:48:30 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:42.723 09:48:30 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:42.723 09:48:30 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:42.723 09:48:30 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:42.723 09:48:30 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:42.723 09:48:30 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:42.723 09:48:30 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:42.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.723 --rc genhtml_branch_coverage=1 00:16:42.723 --rc genhtml_function_coverage=1 00:16:42.723 --rc genhtml_legend=1 00:16:42.723 --rc geninfo_all_blocks=1 00:16:42.723 --rc geninfo_unexecuted_blocks=1 00:16:42.723 00:16:42.723 ' 00:16:42.723 09:48:30 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:42.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.723 --rc genhtml_branch_coverage=1 00:16:42.723 --rc genhtml_function_coverage=1 00:16:42.723 --rc genhtml_legend=1 00:16:42.723 --rc geninfo_all_blocks=1 00:16:42.723 --rc geninfo_unexecuted_blocks=1 00:16:42.723 00:16:42.723 ' 00:16:42.723 09:48:30 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:42.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.723 --rc genhtml_branch_coverage=1 00:16:42.723 --rc genhtml_function_coverage=1 00:16:42.723 --rc genhtml_legend=1 00:16:42.723 --rc geninfo_all_blocks=1 00:16:42.723 --rc geninfo_unexecuted_blocks=1 00:16:42.723 00:16:42.723 ' 00:16:42.723 09:48:30 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:42.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.723 --rc genhtml_branch_coverage=1 00:16:42.723 --rc genhtml_function_coverage=1 00:16:42.723 --rc genhtml_legend=1 00:16:42.723 --rc geninfo_all_blocks=1 00:16:42.723 --rc geninfo_unexecuted_blocks=1 00:16:42.723 00:16:42.723 ' 00:16:42.723 09:48:30 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:42.723 09:48:30 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:42.723 09:48:30 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:42.723 09:48:30 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:42.723 09:48:30 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:42.723 09:48:30 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:42.723 09:48:30 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:42.723 09:48:30 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:42.723 09:48:30 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:42.723 09:48:30 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:42.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.723 09:48:30 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73986 00:16:42.723 09:48:30 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:42.723 09:48:30 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73986 00:16:42.723 09:48:30 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 73986 ']' 00:16:42.723 09:48:30 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.723 09:48:30 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.723 09:48:30 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.723 09:48:30 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.723 09:48:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.723 09:48:30 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:42.723 [2024-12-05 09:48:30.276887] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:42.723 [2024-12-05 09:48:30.276986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73986 ] 00:16:42.981 [2024-12-05 09:48:30.427035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:42.981 [2024-12-05 09:48:30.509861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.981 [2024-12-05 09:48:30.509940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.547 09:48:31 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.547 09:48:31 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:43.547 09:48:31 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:43.547 09:48:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.547 09:48:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.547 [2024-12-05 09:48:31.124528] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:43.547 [2024-12-05 09:48:31.126075] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:43.547 09:48:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.547 09:48:31 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:43.547 09:48:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.547 09:48:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.806 malloc0 00:16:43.806 09:48:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.806 09:48:31 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:43.806 09:48:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.806 09:48:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.806 [2024-12-05 09:48:31.204849] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:43.806 [2024-12-05 09:48:31.204933] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:43.806 [2024-12-05 09:48:31.204942] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:43.806 [2024-12-05 09:48:31.204948] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:43.806 [2024-12-05 09:48:31.212544] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:43.806 [2024-12-05 09:48:31.212564] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:43.806 [2024-12-05 09:48:31.220539] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:43.806 [2024-12-05 09:48:31.220651] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:43.806 [2024-12-05 09:48:31.235536] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:43.806 1 00:16:43.806 09:48:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.806 09:48:31 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:44.737 09:48:32 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74022 00:16:44.737 09:48:32 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:44.737 09:48:32 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:44.737 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:44.737 fio-3.35 00:16:44.737 Starting 1 process 00:16:50.003 09:48:37 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73986 00:16:50.003 09:48:37 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:55.280 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73986 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:55.280 09:48:42 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74133 00:16:55.280 09:48:42 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:55.280 09:48:42 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:55.280 09:48:42 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74133 00:16:55.280 09:48:42 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74133 ']' 00:16:55.280 09:48:42 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.280 09:48:42 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.280 09:48:42 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.280 09:48:42 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.280 09:48:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.280 [2024-12-05 09:48:42.360396] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
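At this point the recovery scenario is fully set up: the first target (pid 73986) exposed /dev/ublkb1, fio ran against it, and the process was killed with SIGKILL mid-I/O; a second spdk_tgt (pid 74133) now starts with the same core mask to reclaim the device. A minimal sketch of the RPC sequence being exercised, assuming a running target and that ublk_drv is already loaded (all calls appear verbatim in the traces):

    # Phase 1: create the device the kernel will keep alive across the crash.
    rpc.py ublk_create_target                      # start the ublk target
    rpc.py bdev_malloc_create -b malloc0 64 4096   # 64 MiB backing bdev, 4 KiB blocks
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # expose /dev/ublkb1 (2 queues, qd 128)

    # Phase 2: after 'kill -9' of the target and a restart of spdk_tgt:
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_recover_disk malloc0 1             # re-attach to the surviving /dev/ublkb1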
00:16:55.280 [2024-12-05 09:48:42.360590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74133 ] 00:16:55.280 [2024-12-05 09:48:42.522049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:55.280 [2024-12-05 09:48:42.619953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.280 [2024-12-05 09:48:42.620123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.849 09:48:43 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.849 09:48:43 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:55.849 09:48:43 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:55.849 09:48:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.849 09:48:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.849 [2024-12-05 09:48:43.222531] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:55.849 [2024-12-05 09:48:43.224442] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:55.849 09:48:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.849 09:48:43 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:55.849 09:48:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.849 09:48:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.849 malloc0 00:16:55.849 09:48:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.849 09:48:43 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:55.849 09:48:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.849 09:48:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.849 [2024-12-05 09:48:43.326665] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:55.849 [2024-12-05 09:48:43.326707] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:55.849 [2024-12-05 09:48:43.326717] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:55.849 [2024-12-05 09:48:43.334567] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:55.849 [2024-12-05 09:48:43.334593] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:55.849 1 00:16:55.849 09:48:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.849 09:48:43 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74022 00:16:56.790 [2024-12-05 09:48:44.337568] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:56.790 [2024-12-05 09:48:44.345546] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:56.790 [2024-12-05 09:48:44.345572] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:57.728 [2024-12-05 09:48:45.345603] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:57.728 [2024-12-05 09:48:45.349538] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:57.728 [2024-12-05 09:48:45.349552] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:16:59.101 [2024-12-05 09:48:46.349579] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:59.101 [2024-12-05 09:48:46.353543] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:59.101 [2024-12-05 09:48:46.353554] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:59.101 [2024-12-05 09:48:46.353562] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:59.101 [2024-12-05 09:48:46.353630] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:21.022 [2024-12-05 09:49:07.659534] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:21.022 [2024-12-05 09:49:07.666090] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:21.022 [2024-12-05 09:49:07.673703] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:21.022 [2024-12-05 09:49:07.673722] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:47.695 00:17:47.695 fio_test: (groupid=0, jobs=1): err= 0: pid=74025: Thu Dec 5 09:49:32 2024 00:17:47.695 read: IOPS=14.7k, BW=57.4MiB/s (60.2MB/s)(3443MiB/60001msec) 00:17:47.695 slat (nsec): min=984, max=436675, avg=4877.45, stdev=1396.05 00:17:47.695 clat (usec): min=920, max=30433k, avg=4003.62, stdev=240364.07 00:17:47.695 lat (usec): min=922, max=30433k, avg=4008.50, stdev=240364.07 00:17:47.695 clat percentiles (usec): 00:17:47.695 | 1.00th=[ 1778], 5.00th=[ 1893], 10.00th=[ 1926], 20.00th=[ 1942], 00:17:47.695 | 30.00th=[ 1958], 40.00th=[ 1975], 50.00th=[ 1991], 60.00th=[ 2008], 00:17:47.695 | 70.00th=[ 2024], 80.00th=[ 2040], 90.00th=[ 2073], 95.00th=[ 2900], 00:17:47.695 | 99.00th=[ 5014], 99.50th=[ 5538], 99.90th=[ 7111], 99.95th=[12387], 00:17:47.695 | 99.99th=[13304] 00:17:47.695 bw ( KiB/s): min=36584, max=124768, per=100.00%, avg=117659.66, stdev=15237.66, samples=59 00:17:47.695 iops : min= 9146, max=31192, avg=29414.92, stdev=3809.41, samples=59 00:17:47.695 write: IOPS=14.7k, BW=57.3MiB/s (60.1MB/s)(3438MiB/60001msec); 0 zone resets 00:17:47.695 slat (nsec): min=975, max=109915, avg=4899.43, stdev=1145.56 00:17:47.695 clat (usec): min=983, max=30433k, avg=4705.81, stdev=277127.16 00:17:47.695 lat (usec): min=988, max=30433k, avg=4710.71, stdev=277127.17 00:17:47.695 clat percentiles (usec): 00:17:47.695 | 1.00th=[ 1811], 5.00th=[ 1975], 10.00th=[ 2008], 20.00th=[ 2040], 00:17:47.695 | 30.00th=[ 2057], 40.00th=[ 2057], 50.00th=[ 2073], 60.00th=[ 2089], 00:17:47.695 | 70.00th=[ 2114], 80.00th=[ 2114], 90.00th=[ 2180], 95.00th=[ 2802], 00:17:47.695 | 99.00th=[ 5080], 99.50th=[ 5604], 99.90th=[ 7177], 99.95th=[12518], 00:17:47.695 | 99.99th=[13435] 00:17:47.695 bw ( KiB/s): min=37200, max=123208, per=100.00%, avg=117482.58, stdev=15203.07, samples=59 00:17:47.695 iops : min= 9300, max=30802, avg=29370.64, stdev=3800.77, samples=59 00:17:47.695 lat (usec) : 1000=0.01% 00:17:47.695 lat (msec) : 2=33.78%, 4=63.71%, 10=2.45%, 20=0.05%, >=2000=0.01% 00:17:47.695 cpu : usr=3.31%, sys=14.70%, ctx=58070, majf=0, minf=13 00:17:47.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:47.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:47.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:47.695 issued 
rwts: total=881451,880099,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:47.695 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:47.695 00:17:47.695 Run status group 0 (all jobs): 00:17:47.695 READ: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=3443MiB (3610MB), run=60001-60001msec 00:17:47.695 WRITE: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=3438MiB (3605MB), run=60001-60001msec 00:17:47.695 00:17:47.695 Disk stats (read/write): 00:17:47.695 ublkb1: ios=878220/876799, merge=0/0, ticks=3477937/4018469, in_queue=7496406, util=99.91% 00:17:47.695 09:49:32 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.695 [2024-12-05 09:49:32.506946] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:47.695 [2024-12-05 09:49:32.545646] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:47.695 [2024-12-05 09:49:32.545786] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:47.695 [2024-12-05 09:49:32.547864] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:47.695 [2024-12-05 09:49:32.551600] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:47.695 [2024-12-05 09:49:32.551611] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.695 09:49:32 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.695 [2024-12-05 09:49:32.555950] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:47.695 [2024-12-05 09:49:32.562523] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:47.695 [2024-12-05 09:49:32.562554] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.695 09:49:32 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:47.695 09:49:32 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:47.695 09:49:32 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74133 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74133 ']' 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74133 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74133 00:17:47.695 killing process with pid 74133 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74133' 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74133 00:17:47.695 09:49:32 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74133 
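The summary is internally consistent: 881451 reads of 4 KiB over 60.001 s is exactly the reported 57.4 MiB/s (60.2 MB/s), and the roughly 21 s pause between UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY, during which fio held 128 I/Os queued, is what inflates the clat stdev and the >=2000 msec latency bucket. A quick cross-check, purely illustrative:

    # Recompute read bandwidth from the issued-I/O count in the fio summary.
    awk 'BEGIN { ios=881451; bs=4096; secs=60.001;
                 printf "%.1f MB/s  %.1f MiB/s\n",
                        ios*bs/secs/1e6, ios*bs/secs/(1024*1024) }'
    # -> 60.2 MB/s  57.4 MiB/s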
00:17:47.695 [2024-12-05 09:49:33.695003] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:47.695 [2024-12-05 09:49:33.695059] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:47.695 00:17:47.695 real 1m4.451s 00:17:47.695 user 1m47.297s 00:17:47.695 sys 0m21.684s 00:17:47.695 ************************************ 00:17:47.695 END TEST ublk_recovery 00:17:47.695 ************************************ 00:17:47.695 09:49:34 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.695 09:49:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.695 09:49:34 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:47.695 09:49:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:47.695 09:49:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:47.695 09:49:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:47.695 09:49:34 -- common/autotest_common.sh@10 -- # set +x 00:17:47.695 09:49:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:47.695 09:49:34 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:47.695 09:49:34 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:47.695 09:49:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:47.695 09:49:34 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:47.695 09:49:34 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:47.695 09:49:34 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:47.695 09:49:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:47.695 09:49:34 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:47.695 09:49:34 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:47.695 09:49:34 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:47.695 09:49:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:47.695 09:49:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.695 09:49:34 -- common/autotest_common.sh@10 -- # set +x 00:17:47.695 ************************************ 00:17:47.695 START TEST ftl 00:17:47.695 ************************************ 00:17:47.695 09:49:34 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:47.695 * Looking for test storage... 
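One detail worth noting in the timing summary: user time (1m47.297s) exceeds wall-clock time (1m4.451s). That is expected rather than suspicious here, since the target ran with -m 0x3, i.e. two polled-mode reactors that busy-spin on their cores for the whole run. A rough check of average core occupancy (illustrative only):

    # user/real > 1 simply reflects two busy-polling reactor cores.
    awk 'BEGIN { real=64.451; user=107.297;
                 printf "%.2f cores busy on average\n", user/real }'
    # -> 1.66 cores busy on average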
00:17:47.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.695 09:49:34 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:47.695 09:49:34 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:47.695 09:49:34 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:17:47.695 09:49:34 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:47.695 09:49:34 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.695 09:49:34 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.695 09:49:34 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.695 09:49:34 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.695 09:49:34 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.695 09:49:34 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.695 09:49:34 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.695 09:49:34 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.695 09:49:34 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.695 09:49:34 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.695 09:49:34 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.695 09:49:34 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:47.695 09:49:34 ftl -- scripts/common.sh@345 -- # : 1 00:17:47.695 09:49:34 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.695 09:49:34 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:47.695 09:49:34 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:47.695 09:49:34 ftl -- scripts/common.sh@353 -- # local d=1 00:17:47.695 09:49:34 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.695 09:49:34 ftl -- scripts/common.sh@355 -- # echo 1 00:17:47.695 09:49:34 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.695 09:49:34 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:47.695 09:49:34 ftl -- scripts/common.sh@353 -- # local d=2 00:17:47.695 09:49:34 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.695 09:49:34 ftl -- scripts/common.sh@355 -- # echo 2 00:17:47.695 09:49:34 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.695 09:49:34 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.695 09:49:34 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.695 09:49:34 ftl -- scripts/common.sh@368 -- # return 0 00:17:47.695 09:49:34 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.695 09:49:34 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:47.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.695 --rc genhtml_branch_coverage=1 00:17:47.695 --rc genhtml_function_coverage=1 00:17:47.695 --rc genhtml_legend=1 00:17:47.695 --rc geninfo_all_blocks=1 00:17:47.695 --rc geninfo_unexecuted_blocks=1 00:17:47.695 00:17:47.695 ' 00:17:47.695 09:49:34 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:47.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.695 --rc genhtml_branch_coverage=1 00:17:47.695 --rc genhtml_function_coverage=1 00:17:47.695 --rc genhtml_legend=1 00:17:47.695 --rc geninfo_all_blocks=1 00:17:47.695 --rc geninfo_unexecuted_blocks=1 00:17:47.695 00:17:47.695 ' 00:17:47.695 09:49:34 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:47.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.695 --rc genhtml_branch_coverage=1 00:17:47.695 --rc genhtml_function_coverage=1 00:17:47.695 --rc 
genhtml_legend=1 00:17:47.695 --rc geninfo_all_blocks=1 00:17:47.695 --rc geninfo_unexecuted_blocks=1 00:17:47.695 00:17:47.695 ' 00:17:47.695 09:49:34 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:47.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.695 --rc genhtml_branch_coverage=1 00:17:47.695 --rc genhtml_function_coverage=1 00:17:47.695 --rc genhtml_legend=1 00:17:47.695 --rc geninfo_all_blocks=1 00:17:47.695 --rc geninfo_unexecuted_blocks=1 00:17:47.695 00:17:47.695 ' 00:17:47.695 09:49:34 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:47.695 09:49:34 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:47.695 09:49:34 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.695 09:49:34 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.695 09:49:34 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:47.695 09:49:34 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:47.695 09:49:34 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:47.695 09:49:34 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:47.695 09:49:34 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:47.695 09:49:34 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.695 09:49:34 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.695 09:49:34 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:47.695 09:49:34 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:47.695 09:49:34 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:47.695 09:49:34 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:47.695 09:49:34 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:47.695 09:49:34 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:47.695 09:49:34 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.695 09:49:34 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.695 09:49:34 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:47.695 09:49:34 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:47.695 09:49:34 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:47.695 09:49:34 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:47.695 09:49:34 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:47.695 09:49:34 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:47.695 09:49:34 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:47.695 09:49:34 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:47.695 09:49:34 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:47.695 09:49:34 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:47.695 09:49:34 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:47.695 09:49:34 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:47.695 09:49:34 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:17:47.695 09:49:34 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:47.695 09:49:34 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:47.695 09:49:34 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:47.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:47.695 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.695 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.695 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.695 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.695 09:49:35 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74938 00:17:47.695 09:49:35 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74938 00:17:47.695 09:49:35 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:47.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.695 09:49:35 ftl -- common/autotest_common.sh@835 -- # '[' -z 74938 ']' 00:17:47.695 09:49:35 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.695 09:49:35 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.695 09:49:35 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.695 09:49:35 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.695 09:49:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:47.695 [2024-12-05 09:49:35.299783] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:17:47.695 [2024-12-05 09:49:35.300107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74938 ] 00:17:47.954 [2024-12-05 09:49:35.459506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.954 [2024-12-05 09:49:35.566309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.526 09:49:36 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.526 09:49:36 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:48.526 09:49:36 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:48.786 09:49:36 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:49.728 09:49:37 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:49.728 09:49:37 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:49.987 09:49:37 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:49.987 09:49:37 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:49.987 09:49:37 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:50.247 09:49:37 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:50.247 09:49:37 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:50.247 09:49:37 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:50.247 09:49:37 ftl -- ftl/ftl.sh@50 -- # break 00:17:50.247 09:49:37 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:50.247 09:49:37 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:17:50.248 09:49:37 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:50.248 09:49:37 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:50.508 09:49:37 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:50.508 09:49:37 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:50.508 09:49:37 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:50.508 09:49:37 ftl -- ftl/ftl.sh@63 -- # break 00:17:50.508 09:49:37 ftl -- ftl/ftl.sh@66 -- # killprocess 74938 00:17:50.508 09:49:37 ftl -- common/autotest_common.sh@954 -- # '[' -z 74938 ']' 00:17:50.508 09:49:37 ftl -- common/autotest_common.sh@958 -- # kill -0 74938 00:17:50.508 09:49:37 ftl -- common/autotest_common.sh@959 -- # uname 00:17:50.508 09:49:37 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.508 09:49:37 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74938 00:17:50.508 killing process with pid 74938 00:17:50.508 09:49:37 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:50.508 09:49:37 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:50.508 09:49:37 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74938' 00:17:50.508 09:49:37 ftl -- common/autotest_common.sh@973 -- # kill 74938 00:17:50.508 09:49:37 ftl -- common/autotest_common.sh@978 -- # wait 74938 00:17:51.893 09:49:39 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:51.893 09:49:39 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:51.893 09:49:39 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:51.893 09:49:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.893 09:49:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:51.893 ************************************ 00:17:51.893 START TEST ftl_fio_basic 00:17:51.893 ************************************ 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:51.893 * Looking for test storage... 
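Device selection in ftl.sh is driven entirely by jq over bdev_get_bdevs output: the write-buffer cache must be a non-zoned bdev with 64-byte metadata and at least 1310720 blocks (0000:00:10.0 here), and the base device is any other qualifying bdev (0000:00:11.0 here). The same filters can be replayed by hand against a live target; a sketch with the filters copied from the traces above, split with an explicit pipe for readability:

    # Cache candidates: 64 B metadata, not zoned, >= 1310720 blocks.
    rpc.py bdev_get_bdevs | jq -r \
      '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
           | .driver_specific.nvme[].pci_address'
    # Base candidates: anything else large enough that is not the cache device.
    rpc.py bdev_get_bdevs | jq -r \
      '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
                    and .zoned == false and .num_blocks >= 1310720)
           | .driver_specific.nvme[].pci_address'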
00:17:51.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:51.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.893 --rc genhtml_branch_coverage=1 00:17:51.893 --rc genhtml_function_coverage=1 00:17:51.893 --rc genhtml_legend=1 00:17:51.893 --rc geninfo_all_blocks=1 00:17:51.893 --rc geninfo_unexecuted_blocks=1 00:17:51.893 00:17:51.893 ' 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:51.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.893 --rc 
genhtml_branch_coverage=1 00:17:51.893 --rc genhtml_function_coverage=1 00:17:51.893 --rc genhtml_legend=1 00:17:51.893 --rc geninfo_all_blocks=1 00:17:51.893 --rc geninfo_unexecuted_blocks=1 00:17:51.893 00:17:51.893 ' 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:51.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.893 --rc genhtml_branch_coverage=1 00:17:51.893 --rc genhtml_function_coverage=1 00:17:51.893 --rc genhtml_legend=1 00:17:51.893 --rc geninfo_all_blocks=1 00:17:51.893 --rc geninfo_unexecuted_blocks=1 00:17:51.893 00:17:51.893 ' 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:51.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.893 --rc genhtml_branch_coverage=1 00:17:51.893 --rc genhtml_function_coverage=1 00:17:51.893 --rc genhtml_legend=1 00:17:51.893 --rc geninfo_all_blocks=1 00:17:51.893 --rc geninfo_unexecuted_blocks=1 00:17:51.893 00:17:51.893 ' 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:51.893 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:51.894 
09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75066 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75066 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75066 ']' 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
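The suite table at the top of fio.sh is a plain bash associative array keyed by tier; 'basic' expands to the three jobs listed in the trace, each run under a 240 s timeout. A minimal reconstruction of the declarations, with values copied from the traces above:

    # fio.sh's suite map, as seen in the xtrace output.
    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
    tests=${suite[basic]}    # selected because this run was invoked with 'basic'
    timeout=240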
00:17:51.894 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.894 09:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:51.894 [2024-12-05 09:49:39.438926] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:17:51.894 [2024-12-05 09:49:39.439194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75066 ] 00:17:52.155 [2024-12-05 09:49:39.599560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:52.155 [2024-12-05 09:49:39.701544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.155 [2024-12-05 09:49:39.701752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.155 [2024-12-05 09:49:39.701820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.727 09:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.727 09:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:52.727 09:49:40 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:52.727 09:49:40 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:52.727 09:49:40 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:52.727 09:49:40 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:52.727 09:49:40 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:52.727 09:49:40 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:52.987 09:49:40 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:52.987 09:49:40 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:52.987 09:49:40 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:52.987 09:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:52.987 09:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:52.987 09:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:52.987 09:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:52.987 09:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:53.248 09:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:53.248 { 00:17:53.248 "name": "nvme0n1", 00:17:53.248 "aliases": [ 00:17:53.248 "7e69338c-0fe2-4743-92a8-de472bfe3080" 00:17:53.248 ], 00:17:53.248 "product_name": "NVMe disk", 00:17:53.248 "block_size": 4096, 00:17:53.248 "num_blocks": 1310720, 00:17:53.248 "uuid": "7e69338c-0fe2-4743-92a8-de472bfe3080", 00:17:53.248 "numa_id": -1, 00:17:53.248 "assigned_rate_limits": { 00:17:53.248 "rw_ios_per_sec": 0, 00:17:53.248 "rw_mbytes_per_sec": 0, 00:17:53.248 "r_mbytes_per_sec": 0, 00:17:53.248 "w_mbytes_per_sec": 0 00:17:53.248 }, 00:17:53.248 "claimed": false, 00:17:53.248 "zoned": false, 00:17:53.248 "supported_io_types": { 00:17:53.248 "read": true, 00:17:53.248 "write": true, 00:17:53.248 "unmap": true, 00:17:53.248 "flush": true, 00:17:53.248 "reset": true, 00:17:53.248 "nvme_admin": true, 00:17:53.248 "nvme_io": true, 00:17:53.248 "nvme_io_md": 
false, 00:17:53.248 "write_zeroes": true, 00:17:53.248 "zcopy": false, 00:17:53.248 "get_zone_info": false, 00:17:53.248 "zone_management": false, 00:17:53.248 "zone_append": false, 00:17:53.248 "compare": true, 00:17:53.248 "compare_and_write": false, 00:17:53.248 "abort": true, 00:17:53.248 "seek_hole": false, 00:17:53.248 "seek_data": false, 00:17:53.248 "copy": true, 00:17:53.248 "nvme_iov_md": false 00:17:53.248 }, 00:17:53.249 "driver_specific": { 00:17:53.249 "nvme": [ 00:17:53.249 { 00:17:53.249 "pci_address": "0000:00:11.0", 00:17:53.249 "trid": { 00:17:53.249 "trtype": "PCIe", 00:17:53.249 "traddr": "0000:00:11.0" 00:17:53.249 }, 00:17:53.249 "ctrlr_data": { 00:17:53.249 "cntlid": 0, 00:17:53.249 "vendor_id": "0x1b36", 00:17:53.249 "model_number": "QEMU NVMe Ctrl", 00:17:53.249 "serial_number": "12341", 00:17:53.249 "firmware_revision": "8.0.0", 00:17:53.249 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:53.249 "oacs": { 00:17:53.249 "security": 0, 00:17:53.249 "format": 1, 00:17:53.249 "firmware": 0, 00:17:53.249 "ns_manage": 1 00:17:53.249 }, 00:17:53.249 "multi_ctrlr": false, 00:17:53.249 "ana_reporting": false 00:17:53.249 }, 00:17:53.249 "vs": { 00:17:53.249 "nvme_version": "1.4" 00:17:53.249 }, 00:17:53.249 "ns_data": { 00:17:53.249 "id": 1, 00:17:53.249 "can_share": false 00:17:53.249 } 00:17:53.249 } 00:17:53.249 ], 00:17:53.249 "mp_policy": "active_passive" 00:17:53.249 } 00:17:53.249 } 00:17:53.249 ]' 00:17:53.249 09:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:53.249 09:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:53.249 09:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:53.249 09:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:53.249 09:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:53.249 09:49:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:53.249 09:49:40 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:53.249 09:49:40 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:53.249 09:49:40 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:53.249 09:49:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:53.249 09:49:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:53.510 09:49:41 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:53.510 09:49:41 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:53.771 09:49:41 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=778a3524-37cb-4bc7-914c-43e6f4d9aa46 00:17:53.771 09:49:41 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 778a3524-37cb-4bc7-914c-43e6f4d9aa46 00:17:54.032 09:49:41 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=8ae51d72-2979-492e-aeb2-2873324c1cf6 00:17:54.032 09:49:41 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8ae51d72-2979-492e-aeb2-2873324c1cf6 00:17:54.032 09:49:41 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:54.032 09:49:41 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:54.032 09:49:41 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=8ae51d72-2979-492e-aeb2-2873324c1cf6 00:17:54.032 09:49:41 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:54.032 09:49:41 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 8ae51d72-2979-492e-aeb2-2873324c1cf6 00:17:54.032 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=8ae51d72-2979-492e-aeb2-2873324c1cf6 00:17:54.032 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:54.032 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:54.032 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:54.032 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8ae51d72-2979-492e-aeb2-2873324c1cf6 00:17:54.293 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:54.293 { 00:17:54.293 "name": "8ae51d72-2979-492e-aeb2-2873324c1cf6", 00:17:54.293 "aliases": [ 00:17:54.293 "lvs/nvme0n1p0" 00:17:54.293 ], 00:17:54.293 "product_name": "Logical Volume", 00:17:54.293 "block_size": 4096, 00:17:54.293 "num_blocks": 26476544, 00:17:54.293 "uuid": "8ae51d72-2979-492e-aeb2-2873324c1cf6", 00:17:54.293 "assigned_rate_limits": { 00:17:54.293 "rw_ios_per_sec": 0, 00:17:54.293 "rw_mbytes_per_sec": 0, 00:17:54.293 "r_mbytes_per_sec": 0, 00:17:54.293 "w_mbytes_per_sec": 0 00:17:54.293 }, 00:17:54.293 "claimed": false, 00:17:54.293 "zoned": false, 00:17:54.293 "supported_io_types": { 00:17:54.293 "read": true, 00:17:54.293 "write": true, 00:17:54.293 "unmap": true, 00:17:54.293 "flush": false, 00:17:54.293 "reset": true, 00:17:54.293 "nvme_admin": false, 00:17:54.293 "nvme_io": false, 00:17:54.293 "nvme_io_md": false, 00:17:54.293 "write_zeroes": true, 00:17:54.293 "zcopy": false, 00:17:54.293 "get_zone_info": false, 00:17:54.293 "zone_management": false, 00:17:54.293 "zone_append": false, 00:17:54.293 "compare": false, 00:17:54.293 "compare_and_write": false, 00:17:54.293 "abort": false, 00:17:54.293 "seek_hole": true, 00:17:54.293 "seek_data": true, 00:17:54.293 "copy": false, 00:17:54.293 "nvme_iov_md": false 00:17:54.293 }, 00:17:54.293 "driver_specific": { 00:17:54.293 "lvol": { 00:17:54.293 "lvol_store_uuid": "778a3524-37cb-4bc7-914c-43e6f4d9aa46", 00:17:54.293 "base_bdev": "nvme0n1", 00:17:54.293 "thin_provision": true, 00:17:54.293 "num_allocated_clusters": 0, 00:17:54.293 "snapshot": false, 00:17:54.293 "clone": false, 00:17:54.293 "esnap_clone": false 00:17:54.293 } 00:17:54.293 } 00:17:54.293 } 00:17:54.293 ]' 00:17:54.293 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:54.293 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:54.293 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:54.293 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:54.293 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:54.293 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:54.293 09:49:41 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:54.293 09:49:41 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:54.293 09:49:41 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:54.555 09:49:41 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:54.555 09:49:41 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:17:54.555 09:49:41 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 8ae51d72-2979-492e-aeb2-2873324c1cf6 00:17:54.555 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=8ae51d72-2979-492e-aeb2-2873324c1cf6 00:17:54.555 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:54.555 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:54.555 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:54.555 09:49:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8ae51d72-2979-492e-aeb2-2873324c1cf6 00:17:54.816 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:54.816 { 00:17:54.816 "name": "8ae51d72-2979-492e-aeb2-2873324c1cf6", 00:17:54.816 "aliases": [ 00:17:54.816 "lvs/nvme0n1p0" 00:17:54.816 ], 00:17:54.816 "product_name": "Logical Volume", 00:17:54.816 "block_size": 4096, 00:17:54.816 "num_blocks": 26476544, 00:17:54.816 "uuid": "8ae51d72-2979-492e-aeb2-2873324c1cf6", 00:17:54.816 "assigned_rate_limits": { 00:17:54.816 "rw_ios_per_sec": 0, 00:17:54.816 "rw_mbytes_per_sec": 0, 00:17:54.816 "r_mbytes_per_sec": 0, 00:17:54.816 "w_mbytes_per_sec": 0 00:17:54.816 }, 00:17:54.816 "claimed": false, 00:17:54.816 "zoned": false, 00:17:54.816 "supported_io_types": { 00:17:54.816 "read": true, 00:17:54.816 "write": true, 00:17:54.816 "unmap": true, 00:17:54.816 "flush": false, 00:17:54.816 "reset": true, 00:17:54.816 "nvme_admin": false, 00:17:54.816 "nvme_io": false, 00:17:54.816 "nvme_io_md": false, 00:17:54.816 "write_zeroes": true, 00:17:54.816 "zcopy": false, 00:17:54.816 "get_zone_info": false, 00:17:54.816 "zone_management": false, 00:17:54.816 "zone_append": false, 00:17:54.816 "compare": false, 00:17:54.816 "compare_and_write": false, 00:17:54.816 "abort": false, 00:17:54.816 "seek_hole": true, 00:17:54.816 "seek_data": true, 00:17:54.816 "copy": false, 00:17:54.816 "nvme_iov_md": false 00:17:54.816 }, 00:17:54.816 "driver_specific": { 00:17:54.816 "lvol": { 00:17:54.816 "lvol_store_uuid": "778a3524-37cb-4bc7-914c-43e6f4d9aa46", 00:17:54.816 "base_bdev": "nvme0n1", 00:17:54.816 "thin_provision": true, 00:17:54.816 "num_allocated_clusters": 0, 00:17:54.816 "snapshot": false, 00:17:54.816 "clone": false, 00:17:54.816 "esnap_clone": false 00:17:54.816 } 00:17:54.816 } 00:17:54.816 } 00:17:54.816 ]' 00:17:54.816 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:54.816 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:54.816 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:54.816 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:54.816 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:54.816 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:54.816 09:49:42 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:54.816 09:49:42 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:55.078 09:49:42 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:55.078 09:49:42 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:55.078 09:49:42 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:55.078 
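The next trace line exposes a real shell bug in fio.sh: line 52 evaluates a numeric test whose left-hand variable expanded to nothing, so the [ builtin sees '-eq' as its first argument and prints "unary operator expected"; the condition evaluates false and the script simply carries on. The log does not show which variable was empty, so the name below is hypothetical; a hedged sketch of the usual fix:

    # What the xtrace "'[' -eq 1 ']'" implies:
    #   [ $some_flag -eq 1 ]            # some_flag unset/empty -> [ -eq 1 ]
    # Supplying a default (and quoting) keeps the test well-formed:
    if [ "${some_flag:-0}" -eq 1 ]; then    # 'some_flag' is a made-up name
        echo "flag set"
    fi
    # Bash-only alternative: (( some_flag == 1 )) treats an unset variable as 0.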
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:55.078 09:49:42 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 8ae51d72-2979-492e-aeb2-2873324c1cf6 00:17:55.078 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=8ae51d72-2979-492e-aeb2-2873324c1cf6 00:17:55.078 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:55.078 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:55.078 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:55.078 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8ae51d72-2979-492e-aeb2-2873324c1cf6 00:17:55.340 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:55.340 { 00:17:55.340 "name": "8ae51d72-2979-492e-aeb2-2873324c1cf6", 00:17:55.340 "aliases": [ 00:17:55.340 "lvs/nvme0n1p0" 00:17:55.340 ], 00:17:55.340 "product_name": "Logical Volume", 00:17:55.340 "block_size": 4096, 00:17:55.340 "num_blocks": 26476544, 00:17:55.340 "uuid": "8ae51d72-2979-492e-aeb2-2873324c1cf6", 00:17:55.340 "assigned_rate_limits": { 00:17:55.340 "rw_ios_per_sec": 0, 00:17:55.340 "rw_mbytes_per_sec": 0, 00:17:55.340 "r_mbytes_per_sec": 0, 00:17:55.340 "w_mbytes_per_sec": 0 00:17:55.340 }, 00:17:55.340 "claimed": false, 00:17:55.340 "zoned": false, 00:17:55.340 "supported_io_types": { 00:17:55.340 "read": true, 00:17:55.340 "write": true, 00:17:55.340 "unmap": true, 00:17:55.340 "flush": false, 00:17:55.340 "reset": true, 00:17:55.340 "nvme_admin": false, 00:17:55.340 "nvme_io": false, 00:17:55.340 "nvme_io_md": false, 00:17:55.340 "write_zeroes": true, 00:17:55.340 "zcopy": false, 00:17:55.340 "get_zone_info": false, 00:17:55.340 "zone_management": false, 00:17:55.340 "zone_append": false, 00:17:55.340 "compare": false, 00:17:55.340 "compare_and_write": false, 00:17:55.340 "abort": false, 00:17:55.341 "seek_hole": true, 00:17:55.341 "seek_data": true, 00:17:55.341 "copy": false, 00:17:55.341 "nvme_iov_md": false 00:17:55.341 }, 00:17:55.341 "driver_specific": { 00:17:55.341 "lvol": { 00:17:55.341 "lvol_store_uuid": "778a3524-37cb-4bc7-914c-43e6f4d9aa46", 00:17:55.341 "base_bdev": "nvme0n1", 00:17:55.341 "thin_provision": true, 00:17:55.341 "num_allocated_clusters": 0, 00:17:55.341 "snapshot": false, 00:17:55.341 "clone": false, 00:17:55.341 "esnap_clone": false 00:17:55.341 } 00:17:55.341 } 00:17:55.341 } 00:17:55.341 ]' 00:17:55.341 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:55.341 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:55.341 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:55.341 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:55.341 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:55.341 09:49:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:55.341 09:49:42 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:55.341 09:49:42 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:55.341 09:49:42 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8ae51d72-2979-492e-aeb2-2873324c1cf6 -c nvc0n1p0 --l2p_dram_limit 60 00:17:55.341 [2024-12-05 09:49:42.963911] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.341 [2024-12-05 09:49:42.963948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:55.341 [2024-12-05 09:49:42.963960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:55.341 [2024-12-05 09:49:42.963967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.341 [2024-12-05 09:49:42.964020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.341 [2024-12-05 09:49:42.964029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:55.341 [2024-12-05 09:49:42.964039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:55.341 [2024-12-05 09:49:42.964045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.341 [2024-12-05 09:49:42.964079] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:55.341 [2024-12-05 09:49:42.964681] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:55.341 [2024-12-05 09:49:42.964704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.341 [2024-12-05 09:49:42.964711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:55.341 [2024-12-05 09:49:42.964720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:17:55.341 [2024-12-05 09:49:42.964725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.341 [2024-12-05 09:49:42.964843] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f585d394-884a-48fe-8e35-2ba3bc74ab0a 00:17:55.341 [2024-12-05 09:49:42.965922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.341 [2024-12-05 09:49:42.965950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:55.341 [2024-12-05 09:49:42.965958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:55.341 [2024-12-05 09:49:42.965965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.603 [2024-12-05 09:49:42.971097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.603 [2024-12-05 09:49:42.971220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:55.603 [2024-12-05 09:49:42.971232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.078 ms 00:17:55.603 [2024-12-05 09:49:42.971240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.603 [2024-12-05 09:49:42.971328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.603 [2024-12-05 09:49:42.971336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:55.604 [2024-12-05 09:49:42.971343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:55.604 [2024-12-05 09:49:42.971352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.604 [2024-12-05 09:49:42.971401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.604 [2024-12-05 09:49:42.971410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:55.604 [2024-12-05 09:49:42.971417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:55.604 [2024-12-05 09:49:42.971423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:55.604 [2024-12-05 09:49:42.971450] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:55.604 [2024-12-05 09:49:42.974321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.604 [2024-12-05 09:49:42.974418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:55.604 [2024-12-05 09:49:42.974434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.874 ms 00:17:55.604 [2024-12-05 09:49:42.974442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.604 [2024-12-05 09:49:42.974478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.604 [2024-12-05 09:49:42.974485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:55.604 [2024-12-05 09:49:42.974493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:55.604 [2024-12-05 09:49:42.974498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.604 [2024-12-05 09:49:42.974539] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:55.604 [2024-12-05 09:49:42.974659] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:55.604 [2024-12-05 09:49:42.974672] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:55.604 [2024-12-05 09:49:42.974680] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:55.604 [2024-12-05 09:49:42.974689] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:55.604 [2024-12-05 09:49:42.974696] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:55.604 [2024-12-05 09:49:42.974704] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:55.604 [2024-12-05 09:49:42.974710] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:55.604 [2024-12-05 09:49:42.974717] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:55.604 [2024-12-05 09:49:42.974722] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:55.604 [2024-12-05 09:49:42.974730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.604 [2024-12-05 09:49:42.974737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:55.604 [2024-12-05 09:49:42.974744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:17:55.604 [2024-12-05 09:49:42.974750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.604 [2024-12-05 09:49:42.974821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.604 [2024-12-05 09:49:42.974827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:55.604 [2024-12-05 09:49:42.974834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:55.604 [2024-12-05 09:49:42.974839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.604 [2024-12-05 09:49:42.974931] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:55.604 [2024-12-05 09:49:42.974938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:55.604 
[2024-12-05 09:49:42.974947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.604 [2024-12-05 09:49:42.974952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.604 [2024-12-05 09:49:42.974960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:55.604 [2024-12-05 09:49:42.974965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:55.604 [2024-12-05 09:49:42.974974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:55.604 [2024-12-05 09:49:42.974979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:55.604 [2024-12-05 09:49:42.974987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:55.604 [2024-12-05 09:49:42.974992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.604 [2024-12-05 09:49:42.974998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:55.604 [2024-12-05 09:49:42.975003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:55.604 [2024-12-05 09:49:42.975010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.604 [2024-12-05 09:49:42.975015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:55.604 [2024-12-05 09:49:42.975021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:55.604 [2024-12-05 09:49:42.975026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.604 [2024-12-05 09:49:42.975034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:55.604 [2024-12-05 09:49:42.975039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:55.604 [2024-12-05 09:49:42.975045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.604 [2024-12-05 09:49:42.975050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:55.604 [2024-12-05 09:49:42.975057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:55.604 [2024-12-05 09:49:42.975062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.604 [2024-12-05 09:49:42.975067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:55.604 [2024-12-05 09:49:42.975072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:55.604 [2024-12-05 09:49:42.975078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.604 [2024-12-05 09:49:42.975084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:55.604 [2024-12-05 09:49:42.975090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:55.604 [2024-12-05 09:49:42.975095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.604 [2024-12-05 09:49:42.975101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:55.604 [2024-12-05 09:49:42.975106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:55.604 [2024-12-05 09:49:42.975112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.604 [2024-12-05 09:49:42.975117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:55.604 [2024-12-05 09:49:42.975124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:55.604 [2024-12-05 09:49:42.975140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:17:55.604 [2024-12-05 09:49:42.975146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:55.604 [2024-12-05 09:49:42.975151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:55.604 [2024-12-05 09:49:42.975157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.604 [2024-12-05 09:49:42.975162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:55.604 [2024-12-05 09:49:42.975170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:55.604 [2024-12-05 09:49:42.975175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.604 [2024-12-05 09:49:42.975181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:55.604 [2024-12-05 09:49:42.975186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:55.604 [2024-12-05 09:49:42.975193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.604 [2024-12-05 09:49:42.975198] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:55.604 [2024-12-05 09:49:42.975205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:55.604 [2024-12-05 09:49:42.975210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.604 [2024-12-05 09:49:42.975225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.604 [2024-12-05 09:49:42.975231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:55.604 [2024-12-05 09:49:42.975239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:55.604 [2024-12-05 09:49:42.975244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:55.604 [2024-12-05 09:49:42.975250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:55.604 [2024-12-05 09:49:42.975255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:55.604 [2024-12-05 09:49:42.975261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:55.604 [2024-12-05 09:49:42.975268] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:55.604 [2024-12-05 09:49:42.975276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.604 [2024-12-05 09:49:42.975282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:55.604 [2024-12-05 09:49:42.975288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:55.604 [2024-12-05 09:49:42.975294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:55.604 [2024-12-05 09:49:42.975301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:55.604 [2024-12-05 09:49:42.975306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:55.604 [2024-12-05 09:49:42.975313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:55.604 [2024-12-05 
09:49:42.975318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:55.604 [2024-12-05 09:49:42.975324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:55.604 [2024-12-05 09:49:42.975330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:55.604 [2024-12-05 09:49:42.975338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:55.604 [2024-12-05 09:49:42.975343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:55.605 [2024-12-05 09:49:42.975349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:55.605 [2024-12-05 09:49:42.975355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:55.605 [2024-12-05 09:49:42.975361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:55.605 [2024-12-05 09:49:42.975367] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:55.605 [2024-12-05 09:49:42.975376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.605 [2024-12-05 09:49:42.975383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:55.605 [2024-12-05 09:49:42.975390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:55.605 [2024-12-05 09:49:42.975396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:55.605 [2024-12-05 09:49:42.975403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:55.605 [2024-12-05 09:49:42.975409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.605 [2024-12-05 09:49:42.975415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:55.605 [2024-12-05 09:49:42.975421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:17:55.605 [2024-12-05 09:49:42.975428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.605 [2024-12-05 09:49:42.975493] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
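The layout dump above can be cross-checked by hand. Region offsets and sizes in the superblock are recorded in units of the 4096-byte block size reported earlier, so region type 0x2 (the L2P) at blk_sz:0x5000 is 20480 blocks, i.e. 80 MiB, matching the "Region l2p ... blocks: 80.00 MiB" line; equivalently, 20971520 L2P entries at an address size of 4 bytes is also 80 MiB. Quick shell arithmetic to confirm (a sanity check, not part of the test):

    echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # l2p region: 80 MiB
    echo $(( 20971520 * 4 / 1024 / 1024 ))    # entries * addr size: 80 MiB
    echo $(( 20971520 * 4096 / 1024**3 ))     # exposed capacity: 80 GiB

Those 20971520 blocks are exactly the num_blocks the finished ftl0 bdev reports further down, and the --l2p_dram_limit 60 passed to bdev_ftl_create is what the "l2p maximum resident size is: 59 (of 60) MiB" notice below honors: only ~60 MiB of the 80 MiB table may stay resident in DRAM at once.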
00:17:55.605 [2024-12-05 09:49:42.975506] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:58.141 [2024-12-05 09:49:45.305256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.305475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:58.141 [2024-12-05 09:49:45.305496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2329.753 ms 00:17:58.141 [2024-12-05 09:49:45.305506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.141 [2024-12-05 09:49:45.330899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.330944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.141 [2024-12-05 09:49:45.330957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.174 ms 00:17:58.141 [2024-12-05 09:49:45.330967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.141 [2024-12-05 09:49:45.331095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.331108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:58.141 [2024-12-05 09:49:45.331117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:58.141 [2024-12-05 09:49:45.331127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.141 [2024-12-05 09:49:45.384207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.384248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.141 [2024-12-05 09:49:45.384264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.028 ms 00:17:58.141 [2024-12-05 09:49:45.384275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.141 [2024-12-05 09:49:45.384315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.384326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:58.141 [2024-12-05 09:49:45.384334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:58.141 [2024-12-05 09:49:45.384343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.141 [2024-12-05 09:49:45.384718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.384737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:58.141 [2024-12-05 09:49:45.384746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:17:58.141 [2024-12-05 09:49:45.384757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.141 [2024-12-05 09:49:45.384873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.384884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:58.141 [2024-12-05 09:49:45.384892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:58.141 [2024-12-05 09:49:45.384902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.141 [2024-12-05 09:49:45.399198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.399334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:58.141 [2024-12-05 
09:49:45.399350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.270 ms 00:17:58.141 [2024-12-05 09:49:45.399360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.141 [2024-12-05 09:49:45.410772] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:58.141 [2024-12-05 09:49:45.425150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.425181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:58.141 [2024-12-05 09:49:45.425196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.699 ms 00:17:58.141 [2024-12-05 09:49:45.425204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.141 [2024-12-05 09:49:45.472684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.472840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:58.141 [2024-12-05 09:49:45.472863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.448 ms 00:17:58.141 [2024-12-05 09:49:45.472871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.141 [2024-12-05 09:49:45.473051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.473061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:58.141 [2024-12-05 09:49:45.473074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:17:58.141 [2024-12-05 09:49:45.473081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.141 [2024-12-05 09:49:45.496012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.496047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:58.141 [2024-12-05 09:49:45.496060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.872 ms 00:17:58.141 [2024-12-05 09:49:45.496067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.141 [2024-12-05 09:49:45.518472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.518503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:58.141 [2024-12-05 09:49:45.518529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.363 ms 00:17:58.141 [2024-12-05 09:49:45.518536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.141 [2024-12-05 09:49:45.519110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.519128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:58.141 [2024-12-05 09:49:45.519138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:17:58.141 [2024-12-05 09:49:45.519145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.141 [2024-12-05 09:49:45.583252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.583285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:58.141 [2024-12-05 09:49:45.583301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.064 ms 00:17:58.141 [2024-12-05 09:49:45.583310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.141 [2024-12-05 
09:49:45.606912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.141 [2024-12-05 09:49:45.606944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:58.141 [2024-12-05 09:49:45.606956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.532 ms 00:17:58.142 [2024-12-05 09:49:45.606965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.142 [2024-12-05 09:49:45.629766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.142 [2024-12-05 09:49:45.629881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:58.142 [2024-12-05 09:49:45.629899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.768 ms 00:17:58.142 [2024-12-05 09:49:45.629906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.142 [2024-12-05 09:49:45.653053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.142 [2024-12-05 09:49:45.653169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:58.142 [2024-12-05 09:49:45.653187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.119 ms 00:17:58.142 [2024-12-05 09:49:45.653194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.142 [2024-12-05 09:49:45.653228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.142 [2024-12-05 09:49:45.653236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:58.142 [2024-12-05 09:49:45.653251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:58.142 [2024-12-05 09:49:45.653258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.142 [2024-12-05 09:49:45.653340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.142 [2024-12-05 09:49:45.653349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:58.142 [2024-12-05 09:49:45.653359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:58.142 [2024-12-05 09:49:45.653367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.142 [2024-12-05 09:49:45.654360] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2690.043 ms, result 0 00:17:58.142 { 00:17:58.142 "name": "ftl0", 00:17:58.142 "uuid": "f585d394-884a-48fe-8e35-2ba3bc74ab0a" 00:17:58.142 } 00:17:58.142 09:49:45 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:58.142 09:49:45 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:58.142 09:49:45 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:58.142 09:49:45 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:58.142 09:49:45 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:58.142 09:49:45 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:58.142 09:49:45 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:58.399 09:49:45 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:58.657 [ 00:17:58.657 { 00:17:58.657 "name": "ftl0", 00:17:58.657 "aliases": [ 00:17:58.657 "f585d394-884a-48fe-8e35-2ba3bc74ab0a" 00:17:58.657 ], 00:17:58.657 "product_name": "FTL 
disk", 00:17:58.657 "block_size": 4096, 00:17:58.657 "num_blocks": 20971520, 00:17:58.657 "uuid": "f585d394-884a-48fe-8e35-2ba3bc74ab0a", 00:17:58.657 "assigned_rate_limits": { 00:17:58.657 "rw_ios_per_sec": 0, 00:17:58.657 "rw_mbytes_per_sec": 0, 00:17:58.657 "r_mbytes_per_sec": 0, 00:17:58.657 "w_mbytes_per_sec": 0 00:17:58.657 }, 00:17:58.657 "claimed": false, 00:17:58.657 "zoned": false, 00:17:58.657 "supported_io_types": { 00:17:58.657 "read": true, 00:17:58.657 "write": true, 00:17:58.657 "unmap": true, 00:17:58.657 "flush": true, 00:17:58.657 "reset": false, 00:17:58.657 "nvme_admin": false, 00:17:58.657 "nvme_io": false, 00:17:58.657 "nvme_io_md": false, 00:17:58.657 "write_zeroes": true, 00:17:58.657 "zcopy": false, 00:17:58.657 "get_zone_info": false, 00:17:58.657 "zone_management": false, 00:17:58.657 "zone_append": false, 00:17:58.657 "compare": false, 00:17:58.657 "compare_and_write": false, 00:17:58.657 "abort": false, 00:17:58.657 "seek_hole": false, 00:17:58.657 "seek_data": false, 00:17:58.657 "copy": false, 00:17:58.657 "nvme_iov_md": false 00:17:58.657 }, 00:17:58.657 "driver_specific": { 00:17:58.657 "ftl": { 00:17:58.657 "base_bdev": "8ae51d72-2979-492e-aeb2-2873324c1cf6", 00:17:58.657 "cache": "nvc0n1p0" 00:17:58.657 } 00:17:58.657 } 00:17:58.657 } 00:17:58.657 ] 00:17:58.657 09:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:58.657 09:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:58.657 09:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:58.657 09:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:58.657 09:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:58.915 [2024-12-05 09:49:46.455202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.915 [2024-12-05 09:49:46.455240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:58.915 [2024-12-05 09:49:46.455250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:58.915 [2024-12-05 09:49:46.455258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.915 [2024-12-05 09:49:46.455288] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:58.915 [2024-12-05 09:49:46.457419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.915 [2024-12-05 09:49:46.457443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:58.915 [2024-12-05 09:49:46.457453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.115 ms 00:17:58.915 [2024-12-05 09:49:46.457460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.915 [2024-12-05 09:49:46.457886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.915 [2024-12-05 09:49:46.457899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:58.915 [2024-12-05 09:49:46.457908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:17:58.915 [2024-12-05 09:49:46.457913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.915 [2024-12-05 09:49:46.460351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.915 [2024-12-05 09:49:46.460370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:58.915 
[2024-12-05 09:49:46.460379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.415 ms 00:17:58.915 [2024-12-05 09:49:46.460385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.915 [2024-12-05 09:49:46.465011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.915 [2024-12-05 09:49:46.465032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:58.915 [2024-12-05 09:49:46.465041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.601 ms 00:17:58.915 [2024-12-05 09:49:46.465046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.915 [2024-12-05 09:49:46.483584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.915 [2024-12-05 09:49:46.483611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:58.915 [2024-12-05 09:49:46.483631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.478 ms 00:17:58.915 [2024-12-05 09:49:46.483637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.915 [2024-12-05 09:49:46.495759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.915 [2024-12-05 09:49:46.495788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:58.916 [2024-12-05 09:49:46.495802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.082 ms 00:17:58.916 [2024-12-05 09:49:46.495808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.916 [2024-12-05 09:49:46.495970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.916 [2024-12-05 09:49:46.495979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:58.916 [2024-12-05 09:49:46.495988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:17:58.916 [2024-12-05 09:49:46.495994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.916 [2024-12-05 09:49:46.513692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.916 [2024-12-05 09:49:46.513716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:58.916 [2024-12-05 09:49:46.513726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.677 ms 00:17:58.916 [2024-12-05 09:49:46.513732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.916 [2024-12-05 09:49:46.531014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.916 [2024-12-05 09:49:46.531113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:58.916 [2024-12-05 09:49:46.531127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.248 ms 00:17:58.916 [2024-12-05 09:49:46.531132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.175 [2024-12-05 09:49:46.548055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.175 [2024-12-05 09:49:46.548086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:59.175 [2024-12-05 09:49:46.548096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.888 ms 00:17:59.175 [2024-12-05 09:49:46.548102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.175 [2024-12-05 09:49:46.564826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.175 [2024-12-05 09:49:46.564851] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:59.175 [2024-12-05 09:49:46.564860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.634 ms 00:17:59.175 [2024-12-05 09:49:46.564866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.175 [2024-12-05 09:49:46.564898] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:59.175 [2024-12-05 09:49:46.564910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.564919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.564925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.564932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.564938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.564945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.564951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.564960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.564965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.564972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.564978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.564985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.564991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.564998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.565003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.565010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.565016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.565023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.565029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.565036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.565041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.565050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 
[2024-12-05 09:49:46.565056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.565065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.565070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.565077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.565084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.565091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.565099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:59.175 [2024-12-05 09:49:46.565106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:59.176 [2024-12-05 09:49:46.565222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:59.176 [2024-12-05 09:49:46.565621] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:59.176 [2024-12-05 09:49:46.565629] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f585d394-884a-48fe-8e35-2ba3bc74ab0a 00:17:59.176 [2024-12-05 09:49:46.565635] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:59.176 [2024-12-05 09:49:46.565643] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:59.176 [2024-12-05 09:49:46.565648] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:59.176 [2024-12-05 09:49:46.565657] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:59.176 [2024-12-05 09:49:46.565662] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:59.176 [2024-12-05 09:49:46.565670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:59.176 [2024-12-05 09:49:46.565676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:59.176 [2024-12-05 09:49:46.565682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:59.176 [2024-12-05 09:49:46.565687] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:59.176 [2024-12-05 09:49:46.565694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.176 [2024-12-05 09:49:46.565700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:59.176 [2024-12-05 09:49:46.565707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:17:59.176 [2024-12-05 09:49:46.565713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.176 [2024-12-05 09:49:46.575441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.177 [2024-12-05 09:49:46.575467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:59.177 [2024-12-05 09:49:46.575475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.696 ms 00:17:59.177 [2024-12-05 09:49:46.575481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-12-05 09:49:46.575772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.177 [2024-12-05 09:49:46.575780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:59.177 [2024-12-05 09:49:46.575788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:59.177 [2024-12-05 09:49:46.575793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-12-05 09:49:46.610256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.177 [2024-12-05 09:49:46.610283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:59.177 [2024-12-05 09:49:46.610293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.177 [2024-12-05 09:49:46.610299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
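The shutdown statistics above show total writes: 960 against user writes: 0, hence "WAF: inf": every media write so far was FTL metadata (superblock, band info, P2L, trim state), with no user data to amortize it. The write amplification factor is simply total media writes over user writes, guarded against the zero case:

    user_writes=0
    total_writes=960
    if (( user_writes > 0 )); then
        # awk supplies the fractional division bash arithmetic lacks
        awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "%.2f\n", t/u }'
    else
        echo inf    # matches the "WAF: inf" line above
    fi

The Rollback records that follow are the normal teardown path: 'FTL shutdown' undoes each startup step in reverse order and finishes in about 268 ms, versus roughly 2.7 s for startup, most of which was the 2.3 s NV cache scrub.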
00:17:59.177 [2024-12-05 09:49:46.610351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.177 [2024-12-05 09:49:46.610358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:59.177 [2024-12-05 09:49:46.610365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.177 [2024-12-05 09:49:46.610371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-12-05 09:49:46.610440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.177 [2024-12-05 09:49:46.610450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:59.177 [2024-12-05 09:49:46.610457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.177 [2024-12-05 09:49:46.610463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-12-05 09:49:46.610487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.177 [2024-12-05 09:49:46.610493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:59.177 [2024-12-05 09:49:46.610500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.177 [2024-12-05 09:49:46.610506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-12-05 09:49:46.673773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.177 [2024-12-05 09:49:46.673808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:59.177 [2024-12-05 09:49:46.673818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.177 [2024-12-05 09:49:46.673824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-12-05 09:49:46.722506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.177 [2024-12-05 09:49:46.722544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:59.177 [2024-12-05 09:49:46.722554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.177 [2024-12-05 09:49:46.722560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-12-05 09:49:46.722634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.177 [2024-12-05 09:49:46.722642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:59.177 [2024-12-05 09:49:46.722652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.177 [2024-12-05 09:49:46.722657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-12-05 09:49:46.722711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.177 [2024-12-05 09:49:46.722718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:59.177 [2024-12-05 09:49:46.722726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.177 [2024-12-05 09:49:46.722731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-12-05 09:49:46.722813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.177 [2024-12-05 09:49:46.722821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:59.177 [2024-12-05 09:49:46.722828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.177 [2024-12-05 
09:49:46.722836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-12-05 09:49:46.722887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.177 [2024-12-05 09:49:46.722894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:59.177 [2024-12-05 09:49:46.722901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.177 [2024-12-05 09:49:46.722907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-12-05 09:49:46.722950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.177 [2024-12-05 09:49:46.722956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:59.177 [2024-12-05 09:49:46.722964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.177 [2024-12-05 09:49:46.722970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-12-05 09:49:46.723012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.177 [2024-12-05 09:49:46.723019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:59.177 [2024-12-05 09:49:46.723026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.177 [2024-12-05 09:49:46.723032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-12-05 09:49:46.723166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 267.943 ms, result 0 00:17:59.177 true 00:17:59.177 09:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75066 00:17:59.177 09:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75066 ']' 00:17:59.177 09:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75066 00:17:59.177 09:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:59.177 09:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.177 09:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75066 00:17:59.177 killing process with pid 75066 00:17:59.177 09:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.177 09:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.177 09:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75066' 00:17:59.177 09:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75066 00:17:59.177 09:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75066 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:05.738 09:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:05.738 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:05.738 fio-3.35 00:18:05.738 Starting 1 thread 00:18:09.943 00:18:09.943 test: (groupid=0, jobs=1): err= 0: pid=75240: Thu Dec 5 09:49:57 2024 00:18:09.943 read: IOPS=1171, BW=77.8MiB/s (81.5MB/s)(255MiB/3273msec) 00:18:09.943 slat (nsec): min=2991, max=44094, avg=5205.43, stdev=2895.07 00:18:09.943 clat (usec): min=244, max=1353, avg=388.89, stdev=141.62 00:18:09.943 lat (usec): min=248, max=1358, avg=394.10, stdev=142.99 00:18:09.943 clat percentiles (usec): 00:18:09.943 | 1.00th=[ 289], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 310], 00:18:09.943 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 326], 00:18:09.943 | 70.00th=[ 392], 80.00th=[ 498], 90.00th=[ 529], 95.00th=[ 611], 00:18:09.943 | 99.00th=[ 947], 99.50th=[ 1057], 99.90th=[ 1172], 99.95th=[ 1254], 00:18:09.943 | 99.99th=[ 1352] 00:18:09.943 write: IOPS=1181, BW=78.5MiB/s (82.3MB/s)(256MiB/3263msec); 0 zone resets 00:18:09.943 slat (nsec): min=13648, max=87828, avg=21666.46, stdev=6966.70 00:18:09.943 clat (usec): min=260, max=1938, avg=421.25, stdev=155.46 00:18:09.943 lat (usec): min=276, max=1972, avg=442.91, stdev=157.63 00:18:09.943 clat percentiles (usec): 00:18:09.943 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:18:09.943 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 347], 60.00th=[ 355], 00:18:09.943 | 70.00th=[ 412], 80.00th=[ 537], 90.00th=[ 603], 95.00th=[ 693], 00:18:09.943 | 99.00th=[ 1020], 99.50th=[ 1074], 99.90th=[ 1303], 99.95th=[ 1319], 00:18:09.943 | 99.99th=[ 1942] 00:18:09.943 bw ( KiB/s): min=59160, max=99008, per=100.00%, avg=83141.33, stdev=15964.03, samples=6 00:18:09.943 iops : min= 870, max= 1456, avg=1222.67, stdev=234.77, samples=6 00:18:09.943 lat (usec) : 250=0.01%, 500=78.10%, 
750=17.91%, 1000=2.89% 00:18:09.943 lat (msec) : 2=1.09% 00:18:09.943 cpu : usr=99.24%, sys=0.06%, ctx=9, majf=0, minf=1169 00:18:09.943 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:09.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.943 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.943 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:09.943 00:18:09.943 Run status group 0 (all jobs): 00:18:09.943 READ: bw=77.8MiB/s (81.5MB/s), 77.8MiB/s-77.8MiB/s (81.5MB/s-81.5MB/s), io=255MiB (267MB), run=3273-3273msec 00:18:09.943 WRITE: bw=78.5MiB/s (82.3MB/s), 78.5MiB/s-78.5MiB/s (82.3MB/s-82.3MB/s), io=256MiB (269MB), run=3263-3263msec 00:18:11.324 ----------------------------------------------------- 00:18:11.324 Suppressions used: 00:18:11.324 count bytes template 00:18:11.324 1 5 /usr/src/fio/parse.c 00:18:11.324 1 8 libtcmalloc_minimal.so 00:18:11.324 1 904 libcrypto.so 00:18:11.324 ----------------------------------------------------- 00:18:11.324 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:11.324 09:49:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:11.629 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:11.629 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:11.629 fio-3.35 00:18:11.629 Starting 2 threads 00:18:43.711 00:18:43.711 first_half: (groupid=0, jobs=1): err= 0: pid=75332: Thu Dec 5 09:50:27 2024 00:18:43.711 read: IOPS=2374, BW=9497KiB/s (9725kB/s)(255MiB/27506msec) 00:18:43.711 slat (usec): min=3, max=101, avg= 5.17, stdev= 1.60 00:18:43.711 clat (usec): min=638, max=364586, avg=43050.69, stdev=32420.24 00:18:43.711 lat (usec): min=643, max=364591, avg=43055.86, stdev=32420.29 00:18:43.711 clat percentiles (msec): 00:18:43.711 | 1.00th=[ 17], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:18:43.711 | 30.00th=[ 31], 40.00th=[ 33], 50.00th=[ 35], 60.00th=[ 38], 00:18:43.711 | 70.00th=[ 41], 80.00th=[ 45], 90.00th=[ 59], 95.00th=[ 83], 00:18:43.711 | 99.00th=[ 224], 99.50th=[ 288], 99.90th=[ 330], 99.95th=[ 338], 00:18:43.711 | 99.99th=[ 355] 00:18:43.711 write: IOPS=2642, BW=10.3MiB/s (10.8MB/s)(256MiB/24804msec); 0 zone resets 00:18:43.711 slat (usec): min=3, max=1077, avg= 6.70, stdev= 7.81 00:18:43.711 clat (usec): min=386, max=139254, avg=10794.47, stdev=14423.74 00:18:43.711 lat (usec): min=393, max=139259, avg=10801.17, stdev=14424.03 00:18:43.711 clat percentiles (usec): 00:18:43.711 | 1.00th=[ 701], 5.00th=[ 906], 10.00th=[ 1057], 20.00th=[ 2376], 00:18:43.711 | 30.00th=[ 4490], 40.00th=[ 5604], 50.00th=[ 6652], 60.00th=[ 9241], 00:18:43.711 | 70.00th=[ 10945], 80.00th=[ 13566], 90.00th=[ 20579], 95.00th=[ 35390], 00:18:43.711 | 99.00th=[ 64226], 99.50th=[101188], 99.90th=[121111], 99.95th=[130548], 00:18:43.711 | 99.99th=[137364] 00:18:43.711 bw ( KiB/s): min= 1040, max=39808, per=100.00%, avg=20965.04, stdev=12904.74, samples=25 00:18:43.711 iops : min= 260, max= 9952, avg=5241.24, stdev=3226.18, samples=25 00:18:43.711 lat (usec) : 500=0.01%, 750=0.83%, 1000=3.23% 00:18:43.711 lat (msec) : 2=5.20%, 4=4.80%, 10=18.61%, 20=12.86%, 50=45.37% 00:18:43.711 lat (msec) : 100=6.92%, 250=1.77%, 500=0.39% 00:18:43.711 cpu : usr=99.30%, sys=0.09%, ctx=46, majf=0, minf=5571 00:18:43.711 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:43.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.711 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:43.711 issued rwts: total=65309,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.711 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:43.711 second_half: (groupid=0, jobs=1): err= 0: pid=75333: Thu Dec 5 09:50:27 2024 00:18:43.711 read: IOPS=2359, BW=9438KiB/s (9665kB/s)(255MiB/27708msec) 00:18:43.711 slat (nsec): min=3107, max=71694, avg=4655.09, stdev=1494.77 00:18:43.711 clat (usec): min=576, max=415288, avg=42172.65, stdev=36062.03 00:18:43.711 lat (usec): min=582, max=415294, avg=42177.31, stdev=36062.26 00:18:43.711 clat percentiles (msec): 00:18:43.711 | 1.00th=[ 9], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:18:43.711 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 34], 60.00th=[ 37], 00:18:43.711 | 70.00th=[ 40], 80.00th=[ 44], 
90.00th=[ 54], 95.00th=[ 69], 00:18:43.712 | 99.00th=[ 253], 99.50th=[ 296], 99.90th=[ 355], 99.95th=[ 380], 00:18:43.712 | 99.99th=[ 409] 00:18:43.712 write: IOPS=2567, BW=10.0MiB/s (10.5MB/s)(256MiB/25525msec); 0 zone resets 00:18:43.712 slat (usec): min=3, max=1024, avg= 6.75, stdev= 8.99 00:18:43.712 clat (usec): min=379, max=138682, avg=12024.94, stdev=17297.18 00:18:43.712 lat (usec): min=386, max=138689, avg=12031.68, stdev=17297.62 00:18:43.712 clat percentiles (usec): 00:18:43.712 | 1.00th=[ 701], 5.00th=[ 930], 10.00th=[ 1254], 20.00th=[ 2573], 00:18:43.712 | 30.00th=[ 3884], 40.00th=[ 4883], 50.00th=[ 5932], 60.00th=[ 8029], 00:18:43.712 | 70.00th=[ 10421], 80.00th=[ 13829], 90.00th=[ 25035], 95.00th=[ 57410], 00:18:43.712 | 99.00th=[ 81265], 99.50th=[106431], 99.90th=[126354], 99.95th=[132645], 00:18:43.712 | 99.99th=[137364] 00:18:43.712 bw ( KiB/s): min= 2576, max=43120, per=100.00%, avg=20965.64, stdev=11666.24, samples=25 00:18:43.712 iops : min= 644, max=10780, avg=5241.36, stdev=2916.52, samples=25 00:18:43.712 lat (usec) : 500=0.01%, 750=0.78%, 1000=2.49% 00:18:43.712 lat (msec) : 2=4.22%, 4=8.00%, 10=19.39%, 20=10.40%, 50=45.28% 00:18:43.712 lat (msec) : 100=7.24%, 250=1.68%, 500=0.51% 00:18:43.712 cpu : usr=99.31%, sys=0.14%, ctx=41, majf=0, minf=5538 00:18:43.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:43.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.712 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:43.712 issued rwts: total=65379,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:43.712 00:18:43.712 Run status group 0 (all jobs): 00:18:43.712 READ: bw=18.4MiB/s (19.3MB/s), 9438KiB/s-9497KiB/s (9665kB/s-9725kB/s), io=511MiB (535MB), run=27506-27708msec 00:18:43.712 WRITE: bw=20.1MiB/s (21.0MB/s), 10.0MiB/s-10.3MiB/s (10.5MB/s-10.8MB/s), io=512MiB (537MB), run=24804-25525msec 00:18:43.712 ----------------------------------------------------- 00:18:43.712 Suppressions used: 00:18:43.712 count bytes template 00:18:43.712 2 10 /usr/src/fio/parse.c 00:18:43.712 3 288 /usr/src/fio/iolog.c 00:18:43.712 1 8 libtcmalloc_minimal.so 00:18:43.712 1 904 libcrypto.so 00:18:43.712 ----------------------------------------------------- 00:18:43.712 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:43.712 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:43.712 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:43.712 fio-3.35 00:18:43.712 Starting 1 thread 00:19:01.841 00:19:01.841 test: (groupid=0, jobs=1): err= 0: pid=75694: Thu Dec 5 09:50:48 2024 00:19:01.841 read: IOPS=6353, BW=24.8MiB/s (26.0MB/s)(255MiB/10263msec) 00:19:01.841 slat (nsec): min=3091, max=20955, avg=4776.56, stdev=1080.21 00:19:01.841 clat (usec): min=1277, max=43827, avg=20139.52, stdev=2672.70 00:19:01.841 lat (usec): min=1281, max=43832, avg=20144.30, stdev=2672.67 00:19:01.841 clat percentiles (usec): 00:19:01.841 | 1.00th=[15401], 5.00th=[16319], 10.00th=[17171], 20.00th=[18220], 00:19:01.841 | 30.00th=[19006], 40.00th=[19530], 50.00th=[19792], 60.00th=[20317], 00:19:01.841 | 70.00th=[20841], 80.00th=[21627], 90.00th=[23200], 95.00th=[24773], 00:19:01.841 | 99.00th=[28967], 99.50th=[30540], 99.90th=[37487], 99.95th=[39060], 00:19:01.841 | 99.99th=[43779] 00:19:01.841 write: IOPS=9641, BW=37.7MiB/s (39.5MB/s)(256MiB/6797msec); 0 zone resets 00:19:01.841 slat (usec): min=4, max=473, avg= 6.35, stdev= 4.90 00:19:01.841 clat (usec): min=703, max=81976, avg=13201.93, stdev=16652.97 00:19:01.841 lat (usec): min=731, max=81982, avg=13208.28, stdev=16652.96 00:19:01.841 clat percentiles (usec): 00:19:01.841 | 1.00th=[ 1287], 5.00th=[ 1549], 10.00th=[ 1713], 20.00th=[ 1975], 00:19:01.841 | 30.00th=[ 2311], 40.00th=[ 3294], 50.00th=[ 7832], 60.00th=[ 9241], 00:19:01.841 | 70.00th=[11207], 80.00th=[14091], 90.00th=[48497], 95.00th=[51643], 00:19:01.841 | 99.00th=[56886], 99.50th=[58459], 99.90th=[62653], 99.95th=[68682], 00:19:01.841 | 99.99th=[79168] 00:19:01.841 bw ( KiB/s): min=17944, max=62152, per=97.10%, avg=37449.14, stdev=10299.00, samples=14 00:19:01.841 iops : min= 4486, max=15538, avg=9362.29, stdev=2574.75, samples=14 00:19:01.841 lat (usec) : 750=0.01%, 1000=0.03% 00:19:01.842 lat (msec) : 2=10.49%, 4=10.08%, 10=11.49%, 20=36.32%, 50=27.92% 00:19:01.842 lat (msec) : 100=3.66% 
00:19:01.842 cpu : usr=99.16%, sys=0.15%, ctx=29, majf=0, minf=5565 00:19:01.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:01.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.842 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:01.842 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.842 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:01.842 00:19:01.842 Run status group 0 (all jobs): 00:19:01.842 READ: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=255MiB (267MB), run=10263-10263msec 00:19:01.842 WRITE: bw=37.7MiB/s (39.5MB/s), 37.7MiB/s-37.7MiB/s (39.5MB/s-39.5MB/s), io=256MiB (268MB), run=6797-6797msec 00:19:02.783 ----------------------------------------------------- 00:19:02.783 Suppressions used: 00:19:02.783 count bytes template 00:19:02.783 1 5 /usr/src/fio/parse.c 00:19:02.783 2 192 /usr/src/fio/iolog.c 00:19:02.783 1 8 libtcmalloc_minimal.so 00:19:02.783 1 904 libcrypto.so 00:19:02.783 ----------------------------------------------------- 00:19:02.783 00:19:02.783 09:50:50 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:02.783 09:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.783 09:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:02.783 09:50:50 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:02.783 Remove shared memory files 00:19:02.783 09:50:50 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:02.783 09:50:50 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:02.783 09:50:50 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:02.783 09:50:50 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:02.783 09:50:50 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57149 /dev/shm/spdk_tgt_trace.pid73986 00:19:02.783 09:50:50 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:02.783 09:50:50 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:02.783 ************************************ 00:19:02.783 END TEST ftl_fio_basic 00:19:02.783 ************************************ 00:19:02.783 00:19:02.783 real 1m11.116s 00:19:02.783 user 2m38.692s 00:19:02.783 sys 0m2.977s 00:19:02.783 09:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.783 09:50:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:02.783 09:50:50 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:02.783 09:50:50 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:02.783 09:50:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.783 09:50:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:02.783 ************************************ 00:19:02.783 START TEST ftl_bdevperf 00:19:02.783 ************************************ 00:19:02.783 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:03.042 * Looking for test storage... 
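A quick sanity check on the ftl_fio_basic timing just above: user CPU time (2m38.692s) exceeds wall-clock time (1m11.116s) by more than a factor of two, which is consistent with the fio jobs in this test keeping more than one core busy on average. Roughly, with the values copied from the log (the bc call is only illustrative):

  echo 'scale=2; (2*60 + 38.692) / (1*60 + 11.116)' | bc   # ~2.23 cores busy on average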
00:19:03.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:03.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.042 --rc genhtml_branch_coverage=1 00:19:03.042 --rc genhtml_function_coverage=1 00:19:03.042 --rc genhtml_legend=1 00:19:03.042 --rc geninfo_all_blocks=1 00:19:03.042 --rc geninfo_unexecuted_blocks=1 00:19:03.042 00:19:03.042 ' 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:03.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.042 --rc genhtml_branch_coverage=1 00:19:03.042 
--rc genhtml_function_coverage=1 00:19:03.042 --rc genhtml_legend=1 00:19:03.042 --rc geninfo_all_blocks=1 00:19:03.042 --rc geninfo_unexecuted_blocks=1 00:19:03.042 00:19:03.042 ' 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:03.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.042 --rc genhtml_branch_coverage=1 00:19:03.042 --rc genhtml_function_coverage=1 00:19:03.042 --rc genhtml_legend=1 00:19:03.042 --rc geninfo_all_blocks=1 00:19:03.042 --rc geninfo_unexecuted_blocks=1 00:19:03.042 00:19:03.042 ' 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:03.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.042 --rc genhtml_branch_coverage=1 00:19:03.042 --rc genhtml_function_coverage=1 00:19:03.042 --rc genhtml_legend=1 00:19:03.042 --rc geninfo_all_blocks=1 00:19:03.042 --rc geninfo_unexecuted_blocks=1 00:19:03.042 00:19:03.042 ' 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:03.042 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75962 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75962 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75962 ']' 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.043 09:50:50 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:03.043 [2024-12-05 09:50:50.640447] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
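The launch sequence recorded here follows the usual SPDK harness pattern: start bdevperf with -z so it comes up idle and is driven entirely over RPC, record its pid, and block in waitforlisten until the RPC socket answers (bounded by the timeout=240 set a few lines earlier) before any scripts/rpc.py calls are issued. A condensed sketch of that pattern, using only the paths and flags visible in this log (error handling omitted):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid"   # autotest helper; polls the RPC socket until the app responds
  # the FTL device is then assembled via scripts/rpc.py, as seen below
  # (bdev_nvme_attach_controller, bdev_ftl_create -b ftl0 ... --l2p_dram_limit 20)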
00:19:03.043 [2024-12-05 09:50:50.640705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75962 ] 00:19:03.302 [2024-12-05 09:50:50.797702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.302 [2024-12-05 09:50:50.902570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.870 09:50:51 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.870 09:50:51 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:03.870 09:50:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:03.870 09:50:51 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:03.870 09:50:51 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:03.870 09:50:51 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:03.870 09:50:51 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:03.870 09:50:51 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:04.129 09:50:51 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:04.129 09:50:51 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:04.129 09:50:51 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:04.129 09:50:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:04.129 09:50:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:04.129 09:50:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:04.129 09:50:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:04.129 09:50:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:04.389 09:50:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:04.389 { 00:19:04.389 "name": "nvme0n1", 00:19:04.389 "aliases": [ 00:19:04.389 "ec7c50ce-18c7-41af-9d0d-34af72f33206" 00:19:04.389 ], 00:19:04.389 "product_name": "NVMe disk", 00:19:04.389 "block_size": 4096, 00:19:04.389 "num_blocks": 1310720, 00:19:04.389 "uuid": "ec7c50ce-18c7-41af-9d0d-34af72f33206", 00:19:04.389 "numa_id": -1, 00:19:04.389 "assigned_rate_limits": { 00:19:04.389 "rw_ios_per_sec": 0, 00:19:04.389 "rw_mbytes_per_sec": 0, 00:19:04.389 "r_mbytes_per_sec": 0, 00:19:04.389 "w_mbytes_per_sec": 0 00:19:04.389 }, 00:19:04.389 "claimed": true, 00:19:04.389 "claim_type": "read_many_write_one", 00:19:04.389 "zoned": false, 00:19:04.389 "supported_io_types": { 00:19:04.389 "read": true, 00:19:04.389 "write": true, 00:19:04.389 "unmap": true, 00:19:04.389 "flush": true, 00:19:04.389 "reset": true, 00:19:04.389 "nvme_admin": true, 00:19:04.389 "nvme_io": true, 00:19:04.389 "nvme_io_md": false, 00:19:04.389 "write_zeroes": true, 00:19:04.389 "zcopy": false, 00:19:04.389 "get_zone_info": false, 00:19:04.389 "zone_management": false, 00:19:04.389 "zone_append": false, 00:19:04.389 "compare": true, 00:19:04.389 "compare_and_write": false, 00:19:04.389 "abort": true, 00:19:04.389 "seek_hole": false, 00:19:04.389 "seek_data": false, 00:19:04.389 "copy": true, 00:19:04.389 "nvme_iov_md": false 00:19:04.389 }, 00:19:04.389 "driver_specific": { 00:19:04.389 
"nvme": [ 00:19:04.389 { 00:19:04.389 "pci_address": "0000:00:11.0", 00:19:04.389 "trid": { 00:19:04.389 "trtype": "PCIe", 00:19:04.389 "traddr": "0000:00:11.0" 00:19:04.389 }, 00:19:04.389 "ctrlr_data": { 00:19:04.389 "cntlid": 0, 00:19:04.389 "vendor_id": "0x1b36", 00:19:04.389 "model_number": "QEMU NVMe Ctrl", 00:19:04.389 "serial_number": "12341", 00:19:04.389 "firmware_revision": "8.0.0", 00:19:04.389 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:04.389 "oacs": { 00:19:04.389 "security": 0, 00:19:04.389 "format": 1, 00:19:04.389 "firmware": 0, 00:19:04.389 "ns_manage": 1 00:19:04.389 }, 00:19:04.389 "multi_ctrlr": false, 00:19:04.389 "ana_reporting": false 00:19:04.389 }, 00:19:04.389 "vs": { 00:19:04.389 "nvme_version": "1.4" 00:19:04.389 }, 00:19:04.389 "ns_data": { 00:19:04.389 "id": 1, 00:19:04.389 "can_share": false 00:19:04.389 } 00:19:04.389 } 00:19:04.389 ], 00:19:04.389 "mp_policy": "active_passive" 00:19:04.389 } 00:19:04.389 } 00:19:04.389 ]' 00:19:04.389 09:50:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:04.389 09:50:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:04.389 09:50:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:04.389 09:50:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:04.389 09:50:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:04.390 09:50:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:04.390 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:04.390 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:04.390 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:04.390 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:04.390 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:04.650 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=778a3524-37cb-4bc7-914c-43e6f4d9aa46 00:19:04.650 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:04.650 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 778a3524-37cb-4bc7-914c-43e6f4d9aa46 00:19:04.911 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:05.172 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=ee07d6d4-7bf2-4a30-b0e4-945491dd1bb0 00:19:05.172 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ee07d6d4-7bf2-4a30-b0e4-945491dd1bb0 00:19:05.433 09:50:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=e270f2ec-8d94-4263-8346-910888232459 00:19:05.433 09:50:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e270f2ec-8d94-4263-8346-910888232459 00:19:05.433 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:05.433 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:05.433 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=e270f2ec-8d94-4263-8346-910888232459 00:19:05.433 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:05.433 09:50:52 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size e270f2ec-8d94-4263-8346-910888232459 00:19:05.433 09:50:52 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=e270f2ec-8d94-4263-8346-910888232459 00:19:05.433 09:50:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:05.433 09:50:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:05.433 09:50:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:05.434 09:50:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e270f2ec-8d94-4263-8346-910888232459 00:19:05.695 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:05.695 { 00:19:05.695 "name": "e270f2ec-8d94-4263-8346-910888232459", 00:19:05.695 "aliases": [ 00:19:05.695 "lvs/nvme0n1p0" 00:19:05.695 ], 00:19:05.695 "product_name": "Logical Volume", 00:19:05.695 "block_size": 4096, 00:19:05.695 "num_blocks": 26476544, 00:19:05.695 "uuid": "e270f2ec-8d94-4263-8346-910888232459", 00:19:05.695 "assigned_rate_limits": { 00:19:05.695 "rw_ios_per_sec": 0, 00:19:05.695 "rw_mbytes_per_sec": 0, 00:19:05.695 "r_mbytes_per_sec": 0, 00:19:05.695 "w_mbytes_per_sec": 0 00:19:05.695 }, 00:19:05.695 "claimed": false, 00:19:05.695 "zoned": false, 00:19:05.695 "supported_io_types": { 00:19:05.695 "read": true, 00:19:05.695 "write": true, 00:19:05.695 "unmap": true, 00:19:05.695 "flush": false, 00:19:05.695 "reset": true, 00:19:05.695 "nvme_admin": false, 00:19:05.695 "nvme_io": false, 00:19:05.695 "nvme_io_md": false, 00:19:05.695 "write_zeroes": true, 00:19:05.695 "zcopy": false, 00:19:05.695 "get_zone_info": false, 00:19:05.695 "zone_management": false, 00:19:05.695 "zone_append": false, 00:19:05.695 "compare": false, 00:19:05.695 "compare_and_write": false, 00:19:05.695 "abort": false, 00:19:05.695 "seek_hole": true, 00:19:05.695 "seek_data": true, 00:19:05.695 "copy": false, 00:19:05.695 "nvme_iov_md": false 00:19:05.695 }, 00:19:05.695 "driver_specific": { 00:19:05.695 "lvol": { 00:19:05.695 "lvol_store_uuid": "ee07d6d4-7bf2-4a30-b0e4-945491dd1bb0", 00:19:05.695 "base_bdev": "nvme0n1", 00:19:05.695 "thin_provision": true, 00:19:05.695 "num_allocated_clusters": 0, 00:19:05.695 "snapshot": false, 00:19:05.695 "clone": false, 00:19:05.695 "esnap_clone": false 00:19:05.695 } 00:19:05.695 } 00:19:05.695 } 00:19:05.695 ]' 00:19:05.695 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:05.695 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:05.695 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:05.695 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:05.695 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:05.695 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:05.695 09:50:53 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:05.695 09:50:53 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:05.695 09:50:53 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:05.956 09:50:53 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:05.956 09:50:53 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:05.956 09:50:53 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size e270f2ec-8d94-4263-8346-910888232459 00:19:05.956 09:50:53 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=e270f2ec-8d94-4263-8346-910888232459 00:19:05.956 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:05.956 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:05.956 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:05.956 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e270f2ec-8d94-4263-8346-910888232459 00:19:06.217 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:06.218 { 00:19:06.218 "name": "e270f2ec-8d94-4263-8346-910888232459", 00:19:06.218 "aliases": [ 00:19:06.218 "lvs/nvme0n1p0" 00:19:06.218 ], 00:19:06.218 "product_name": "Logical Volume", 00:19:06.218 "block_size": 4096, 00:19:06.218 "num_blocks": 26476544, 00:19:06.218 "uuid": "e270f2ec-8d94-4263-8346-910888232459", 00:19:06.218 "assigned_rate_limits": { 00:19:06.218 "rw_ios_per_sec": 0, 00:19:06.218 "rw_mbytes_per_sec": 0, 00:19:06.218 "r_mbytes_per_sec": 0, 00:19:06.218 "w_mbytes_per_sec": 0 00:19:06.218 }, 00:19:06.218 "claimed": false, 00:19:06.218 "zoned": false, 00:19:06.218 "supported_io_types": { 00:19:06.218 "read": true, 00:19:06.218 "write": true, 00:19:06.218 "unmap": true, 00:19:06.218 "flush": false, 00:19:06.218 "reset": true, 00:19:06.218 "nvme_admin": false, 00:19:06.218 "nvme_io": false, 00:19:06.218 "nvme_io_md": false, 00:19:06.218 "write_zeroes": true, 00:19:06.218 "zcopy": false, 00:19:06.218 "get_zone_info": false, 00:19:06.218 "zone_management": false, 00:19:06.218 "zone_append": false, 00:19:06.218 "compare": false, 00:19:06.218 "compare_and_write": false, 00:19:06.218 "abort": false, 00:19:06.218 "seek_hole": true, 00:19:06.218 "seek_data": true, 00:19:06.218 "copy": false, 00:19:06.218 "nvme_iov_md": false 00:19:06.218 }, 00:19:06.218 "driver_specific": { 00:19:06.218 "lvol": { 00:19:06.218 "lvol_store_uuid": "ee07d6d4-7bf2-4a30-b0e4-945491dd1bb0", 00:19:06.218 "base_bdev": "nvme0n1", 00:19:06.218 "thin_provision": true, 00:19:06.218 "num_allocated_clusters": 0, 00:19:06.218 "snapshot": false, 00:19:06.218 "clone": false, 00:19:06.218 "esnap_clone": false 00:19:06.218 } 00:19:06.218 } 00:19:06.218 } 00:19:06.218 ]' 00:19:06.218 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:06.218 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:06.218 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:06.218 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:06.218 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:06.218 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:06.218 09:50:53 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:06.218 09:50:53 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:06.479 09:50:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:06.479 09:50:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size e270f2ec-8d94-4263-8346-910888232459 00:19:06.479 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=e270f2ec-8d94-4263-8346-910888232459 00:19:06.479 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:06.479 09:50:53 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:06.479 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:06.479 09:50:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e270f2ec-8d94-4263-8346-910888232459 00:19:06.759 09:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:06.759 { 00:19:06.759 "name": "e270f2ec-8d94-4263-8346-910888232459", 00:19:06.759 "aliases": [ 00:19:06.759 "lvs/nvme0n1p0" 00:19:06.759 ], 00:19:06.759 "product_name": "Logical Volume", 00:19:06.759 "block_size": 4096, 00:19:06.759 "num_blocks": 26476544, 00:19:06.759 "uuid": "e270f2ec-8d94-4263-8346-910888232459", 00:19:06.759 "assigned_rate_limits": { 00:19:06.759 "rw_ios_per_sec": 0, 00:19:06.760 "rw_mbytes_per_sec": 0, 00:19:06.760 "r_mbytes_per_sec": 0, 00:19:06.760 "w_mbytes_per_sec": 0 00:19:06.760 }, 00:19:06.760 "claimed": false, 00:19:06.760 "zoned": false, 00:19:06.760 "supported_io_types": { 00:19:06.760 "read": true, 00:19:06.760 "write": true, 00:19:06.760 "unmap": true, 00:19:06.760 "flush": false, 00:19:06.760 "reset": true, 00:19:06.760 "nvme_admin": false, 00:19:06.760 "nvme_io": false, 00:19:06.760 "nvme_io_md": false, 00:19:06.760 "write_zeroes": true, 00:19:06.760 "zcopy": false, 00:19:06.760 "get_zone_info": false, 00:19:06.760 "zone_management": false, 00:19:06.760 "zone_append": false, 00:19:06.760 "compare": false, 00:19:06.760 "compare_and_write": false, 00:19:06.760 "abort": false, 00:19:06.760 "seek_hole": true, 00:19:06.760 "seek_data": true, 00:19:06.760 "copy": false, 00:19:06.760 "nvme_iov_md": false 00:19:06.760 }, 00:19:06.760 "driver_specific": { 00:19:06.760 "lvol": { 00:19:06.760 "lvol_store_uuid": "ee07d6d4-7bf2-4a30-b0e4-945491dd1bb0", 00:19:06.760 "base_bdev": "nvme0n1", 00:19:06.760 "thin_provision": true, 00:19:06.760 "num_allocated_clusters": 0, 00:19:06.760 "snapshot": false, 00:19:06.760 "clone": false, 00:19:06.760 "esnap_clone": false 00:19:06.760 } 00:19:06.760 } 00:19:06.760 } 00:19:06.760 ]' 00:19:06.760 09:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:06.760 09:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:06.760 09:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:06.760 09:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:06.760 09:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:06.760 09:50:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:06.760 09:50:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:06.760 09:50:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e270f2ec-8d94-4263-8346-910888232459 -c nvc0n1p0 --l2p_dram_limit 20 00:19:07.025 [2024-12-05 09:50:54.416269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.025 [2024-12-05 09:50:54.416336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:07.025 [2024-12-05 09:50:54.416351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:07.025 [2024-12-05 09:50:54.416363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.025 [2024-12-05 09:50:54.416427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.025 [2024-12-05 09:50:54.416441] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:07.025 [2024-12-05 09:50:54.416450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:07.025 [2024-12-05 09:50:54.416460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.025 [2024-12-05 09:50:54.416479] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:07.025 [2024-12-05 09:50:54.417246] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:07.025 [2024-12-05 09:50:54.417275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.025 [2024-12-05 09:50:54.417286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:07.025 [2024-12-05 09:50:54.417296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:19:07.025 [2024-12-05 09:50:54.417308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.025 [2024-12-05 09:50:54.417384] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5fc12270-af2f-4fff-b4b8-fdf28fa4d685 00:19:07.025 [2024-12-05 09:50:54.419038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.025 [2024-12-05 09:50:54.419087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:07.025 [2024-12-05 09:50:54.419107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:07.025 [2024-12-05 09:50:54.419116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.025 [2024-12-05 09:50:54.427694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.025 [2024-12-05 09:50:54.427933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:07.025 [2024-12-05 09:50:54.427960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.535 ms 00:19:07.025 [2024-12-05 09:50:54.427972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.025 [2024-12-05 09:50:54.428077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.025 [2024-12-05 09:50:54.428088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:07.025 [2024-12-05 09:50:54.428103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:07.025 [2024-12-05 09:50:54.428111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.025 [2024-12-05 09:50:54.428169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.025 [2024-12-05 09:50:54.428179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:07.025 [2024-12-05 09:50:54.428189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:07.025 [2024-12-05 09:50:54.428197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.025 [2024-12-05 09:50:54.428222] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:07.025 [2024-12-05 09:50:54.432707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.025 [2024-12-05 09:50:54.432738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:07.025 [2024-12-05 09:50:54.432747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.496 ms 00:19:07.025 [2024-12-05 09:50:54.432759] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.025 [2024-12-05 09:50:54.432788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.025 [2024-12-05 09:50:54.432798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:07.025 [2024-12-05 09:50:54.432806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:07.025 [2024-12-05 09:50:54.432814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.025 [2024-12-05 09:50:54.432841] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:07.025 [2024-12-05 09:50:54.432983] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:07.025 [2024-12-05 09:50:54.432995] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:07.025 [2024-12-05 09:50:54.433008] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:07.025 [2024-12-05 09:50:54.433018] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:07.025 [2024-12-05 09:50:54.433028] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:07.025 [2024-12-05 09:50:54.433036] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:07.025 [2024-12-05 09:50:54.433045] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:07.025 [2024-12-05 09:50:54.433052] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:07.025 [2024-12-05 09:50:54.433062] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:07.025 [2024-12-05 09:50:54.433072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.025 [2024-12-05 09:50:54.433080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:07.025 [2024-12-05 09:50:54.433088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:19:07.025 [2024-12-05 09:50:54.433097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.025 [2024-12-05 09:50:54.433179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.025 [2024-12-05 09:50:54.433188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:07.025 [2024-12-05 09:50:54.433195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:07.025 [2024-12-05 09:50:54.433206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.025 [2024-12-05 09:50:54.433294] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:07.025 [2024-12-05 09:50:54.433306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:07.025 [2024-12-05 09:50:54.433314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.025 [2024-12-05 09:50:54.433323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.025 [2024-12-05 09:50:54.433331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:07.025 [2024-12-05 09:50:54.433339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:07.025 [2024-12-05 09:50:54.433346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:07.025 
[2024-12-05 09:50:54.433354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:07.025 [2024-12-05 09:50:54.433360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:07.025 [2024-12-05 09:50:54.433368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.025 [2024-12-05 09:50:54.433375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:07.025 [2024-12-05 09:50:54.433391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:07.025 [2024-12-05 09:50:54.433398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.025 [2024-12-05 09:50:54.433406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:07.025 [2024-12-05 09:50:54.433413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:07.025 [2024-12-05 09:50:54.433422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.025 [2024-12-05 09:50:54.433428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:07.025 [2024-12-05 09:50:54.433436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:07.025 [2024-12-05 09:50:54.433443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.025 [2024-12-05 09:50:54.433452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:07.025 [2024-12-05 09:50:54.433459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:07.025 [2024-12-05 09:50:54.433469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.025 [2024-12-05 09:50:54.433476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:07.025 [2024-12-05 09:50:54.433484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:07.025 [2024-12-05 09:50:54.433491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.025 [2024-12-05 09:50:54.433499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:07.025 [2024-12-05 09:50:54.433525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:07.025 [2024-12-05 09:50:54.433535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.025 [2024-12-05 09:50:54.433542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:07.025 [2024-12-05 09:50:54.433550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:07.025 [2024-12-05 09:50:54.433557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.025 [2024-12-05 09:50:54.433567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:07.025 [2024-12-05 09:50:54.433573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:07.026 [2024-12-05 09:50:54.433581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.026 [2024-12-05 09:50:54.433588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:07.026 [2024-12-05 09:50:54.433596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:07.026 [2024-12-05 09:50:54.433603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.026 [2024-12-05 09:50:54.433612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:07.026 [2024-12-05 09:50:54.433620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:07.026 [2024-12-05 09:50:54.433628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.026 [2024-12-05 09:50:54.433635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:07.026 [2024-12-05 09:50:54.433643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:07.026 [2024-12-05 09:50:54.433650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.026 [2024-12-05 09:50:54.433659] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:07.026 [2024-12-05 09:50:54.433666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:07.026 [2024-12-05 09:50:54.433675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.026 [2024-12-05 09:50:54.433682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.026 [2024-12-05 09:50:54.433693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:07.026 [2024-12-05 09:50:54.433700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:07.026 [2024-12-05 09:50:54.433708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:07.026 [2024-12-05 09:50:54.433715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:07.026 [2024-12-05 09:50:54.433722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:07.026 [2024-12-05 09:50:54.433729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:07.026 [2024-12-05 09:50:54.433739] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:07.026 [2024-12-05 09:50:54.433748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.026 [2024-12-05 09:50:54.433758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:07.026 [2024-12-05 09:50:54.433765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:07.026 [2024-12-05 09:50:54.433774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:07.026 [2024-12-05 09:50:54.433781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:07.026 [2024-12-05 09:50:54.433789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:07.026 [2024-12-05 09:50:54.433796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:07.026 [2024-12-05 09:50:54.433805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:07.026 [2024-12-05 09:50:54.433812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:07.026 [2024-12-05 09:50:54.433823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:07.026 [2024-12-05 09:50:54.433830] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:07.026 [2024-12-05 09:50:54.433839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:07.026 [2024-12-05 09:50:54.433846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:07.026 [2024-12-05 09:50:54.433861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:07.026 [2024-12-05 09:50:54.433868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:07.026 [2024-12-05 09:50:54.433877] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:07.026 [2024-12-05 09:50:54.433885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.026 [2024-12-05 09:50:54.433897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:07.026 [2024-12-05 09:50:54.433904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:07.026 [2024-12-05 09:50:54.433913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:07.026 [2024-12-05 09:50:54.433920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:07.026 [2024-12-05 09:50:54.433929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.026 [2024-12-05 09:50:54.433936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:07.026 [2024-12-05 09:50:54.433945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:19:07.026 [2024-12-05 09:50:54.433952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.026 [2024-12-05 09:50:54.433986] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
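The startup being traced here is the result of a bdev_ftl_create RPC against a base bdev and the nvc0n1p0 write-buffer cache. A minimal sketch of issuing the same call from Python — the base bdev name and the exact rpc.py flags are assumptions for illustration; only nvc0n1p0 and the {"name", "uuid"} reply printed further below are confirmed by this log:

    import json
    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    def rpc(*args):
        # Run rpc.py and decode its JSON reply (None if it prints nothing).
        out = subprocess.check_output([RPC, *args], text=True)
        return json.loads(out) if out.strip() else None

    # Assumed flags: -b name, -d base bdev, -c cache bdev. 'basen1' is a
    # hypothetical base bdev; the log only names the cache (nvc0n1p0).
    ftl = rpc("bdev_ftl_create", "-b", "ftl0", "-d", "basen1", "-c", "nvc0n1p0")
    print(ftl["name"], ftl["uuid"])  # e.g. ftl0 5fc12270-af2f-4fff-b4b8-fdf28fa4d685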
00:19:07.026 [2024-12-05 09:50:54.433995] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:11.229 [2024-12-05 09:50:57.981795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.229 [2024-12-05 09:50:57.982071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:11.229 [2024-12-05 09:50:57.982106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3547.792 ms 00:19:11.229 [2024-12-05 09:50:57.982117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.229 [2024-12-05 09:50:58.014325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.229 [2024-12-05 09:50:58.014386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:11.229 [2024-12-05 09:50:58.014404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.933 ms 00:19:11.229 [2024-12-05 09:50:58.014413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.229 [2024-12-05 09:50:58.014579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.229 [2024-12-05 09:50:58.014592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:11.229 [2024-12-05 09:50:58.014607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:11.229 [2024-12-05 09:50:58.014615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.229 [2024-12-05 09:50:58.057579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.229 [2024-12-05 09:50:58.057801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:11.229 [2024-12-05 09:50:58.057830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.904 ms 00:19:11.229 [2024-12-05 09:50:58.057841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.229 [2024-12-05 09:50:58.057895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.229 [2024-12-05 09:50:58.057905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:11.229 [2024-12-05 09:50:58.057917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:11.229 [2024-12-05 09:50:58.057927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.229 [2024-12-05 09:50:58.058502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.229 [2024-12-05 09:50:58.058559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:11.229 [2024-12-05 09:50:58.058573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:19:11.229 [2024-12-05 09:50:58.058581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.229 [2024-12-05 09:50:58.058706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.229 [2024-12-05 09:50:58.058716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:11.229 [2024-12-05 09:50:58.058732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:11.229 [2024-12-05 09:50:58.058741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.229 [2024-12-05 09:50:58.074759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.229 [2024-12-05 09:50:58.074801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:11.229 [2024-12-05 
09:50:58.074815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.994 ms 00:19:11.229 [2024-12-05 09:50:58.074833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.229 [2024-12-05 09:50:58.087910] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:11.229 [2024-12-05 09:50:58.095177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.229 [2024-12-05 09:50:58.095229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:11.229 [2024-12-05 09:50:58.095241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.259 ms 00:19:11.229 [2024-12-05 09:50:58.095252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.229 [2024-12-05 09:50:58.189650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.229 [2024-12-05 09:50:58.189723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:11.229 [2024-12-05 09:50:58.189738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.368 ms 00:19:11.229 [2024-12-05 09:50:58.189750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.229 [2024-12-05 09:50:58.189960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.229 [2024-12-05 09:50:58.189979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:11.229 [2024-12-05 09:50:58.189989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:19:11.229 [2024-12-05 09:50:58.190004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.229 [2024-12-05 09:50:58.216097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.229 [2024-12-05 09:50:58.216156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:11.229 [2024-12-05 09:50:58.216171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.040 ms 00:19:11.229 [2024-12-05 09:50:58.216182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.229 [2024-12-05 09:50:58.241007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.229 [2024-12-05 09:50:58.241060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:11.229 [2024-12-05 09:50:58.241073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.774 ms 00:19:11.229 [2024-12-05 09:50:58.241083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.230 [2024-12-05 09:50:58.241741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.230 [2024-12-05 09:50:58.241766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:11.230 [2024-12-05 09:50:58.241777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:19:11.230 [2024-12-05 09:50:58.241787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.230 [2024-12-05 09:50:58.324628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.230 [2024-12-05 09:50:58.324693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:11.230 [2024-12-05 09:50:58.324708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.798 ms 00:19:11.230 [2024-12-05 09:50:58.324719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.230 [2024-12-05 
09:50:58.353484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.230 [2024-12-05 09:50:58.353552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:11.230 [2024-12-05 09:50:58.353570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.653 ms 00:19:11.230 [2024-12-05 09:50:58.353581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.230 [2024-12-05 09:50:58.379362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.230 [2024-12-05 09:50:58.379414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:11.230 [2024-12-05 09:50:58.379427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.732 ms 00:19:11.230 [2024-12-05 09:50:58.379437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.230 [2024-12-05 09:50:58.405915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.230 [2024-12-05 09:50:58.405976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:11.230 [2024-12-05 09:50:58.405990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.430 ms 00:19:11.230 [2024-12-05 09:50:58.406001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.230 [2024-12-05 09:50:58.406054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.230 [2024-12-05 09:50:58.406070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:11.230 [2024-12-05 09:50:58.406080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:11.230 [2024-12-05 09:50:58.406091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.230 [2024-12-05 09:50:58.406186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.230 [2024-12-05 09:50:58.406199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:11.230 [2024-12-05 09:50:58.406208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:11.230 [2024-12-05 09:50:58.406219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.230 [2024-12-05 09:50:58.407429] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3990.669 ms, result 0 00:19:11.230 { 00:19:11.230 "name": "ftl0", 00:19:11.230 "uuid": "5fc12270-af2f-4fff-b4b8-fdf28fa4d685" 00:19:11.230 } 00:19:11.230 09:50:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:11.230 09:50:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:11.230 09:50:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:11.230 09:50:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:11.230 [2024-12-05 09:50:58.743553] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:11.230 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:11.230 Zero copy mechanism will not be used. 00:19:11.230 Running I/O for 4 seconds... 
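Two details of the setup just above are worth unpacking. The stats check is a liveness probe: bdev_ftl_get_stats must return a JSON object whose .name is ftl0 before the benchmark starts. And the 69632-byte I/O size is 68 KiB, exactly 4096 bytes past the 65536-byte (64 KiB) zero-copy threshold, which is why bdevperf announces that zero copy will not be used. A sketch of the same probe in Python, assuming only what the log shows (the rpc.py path, the command, and the .name field that jq extracts):

    import json
    import subprocess

    out = subprocess.check_output(
        ["/home/vagrant/spdk_repo/spdk/scripts/rpc.py",
         "bdev_ftl_get_stats", "-b", "ftl0"], text=True)
    stats = json.loads(out)
    assert stats["name"] == "ftl0"  # same test as: jq -r .name | grep -qw ftl0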
00:19:13.562 668.00 IOPS, 44.36 MiB/s [2024-12-05T09:51:01.764Z] 818.50 IOPS, 54.35 MiB/s [2024-12-05T09:51:03.153Z] 818.33 IOPS, 54.34 MiB/s [2024-12-05T09:51:03.153Z] 1325.00 IOPS, 87.99 MiB/s 00:19:15.524 Latency(us) 00:19:15.524 [2024-12-05T09:51:03.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.524 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:15.524 ftl0 : 4.00 1324.12 87.93 0.00 0.00 796.33 153.60 4436.28 00:19:15.524 [2024-12-05T09:51:03.153Z] =================================================================================================================== 00:19:15.524 [2024-12-05T09:51:03.153Z] Total : 1324.12 87.93 0.00 0.00 796.33 153.60 4436.28 00:19:15.524 { 00:19:15.524 "results": [ 00:19:15.524 { 00:19:15.524 "job": "ftl0", 00:19:15.524 "core_mask": "0x1", 00:19:15.524 "workload": "randwrite", 00:19:15.524 "status": "finished", 00:19:15.524 "queue_depth": 1, 00:19:15.524 "io_size": 69632, 00:19:15.524 "runtime": 4.003407, 00:19:15.524 "iops": 1324.122178934093, 00:19:15.524 "mibps": 87.92998844484211, 00:19:15.524 "io_failed": 0, 00:19:15.524 "io_timeout": 0, 00:19:15.524 "avg_latency_us": 796.3300277161059, 00:19:15.524 "min_latency_us": 153.6, 00:19:15.524 "max_latency_us": 4436.283076923077 00:19:15.524 } 00:19:15.524 ], 00:19:15.524 "core_count": 1 00:19:15.524 } 00:19:15.524 [2024-12-05 09:51:02.756273] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:15.524 09:51:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:15.524 [2024-12-05 09:51:02.868608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:15.524 Running I/O for 4 seconds... 
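The MiB/s column in the table above is derivable from the JSON blob: throughput = iops x io_size. A quick check against the depth-1 run, using only values printed in this log:

    # Values copied from the "results" JSON above.
    iops = 1324.122178934093
    io_size = 69632                      # bytes per I/O (68 KiB)

    mibps = iops * io_size / (1024 * 1024)
    print(f"{mibps:.2f} MiB/s")          # -> 87.93, matching "mibps" and the table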
00:19:17.415 6083.00 IOPS, 23.76 MiB/s [2024-12-05T09:51:05.994Z] 5317.50 IOPS, 20.77 MiB/s [2024-12-05T09:51:06.933Z] 5155.67 IOPS, 20.14 MiB/s [2024-12-05T09:51:06.933Z] 5235.75 IOPS, 20.45 MiB/s 00:19:19.304 Latency(us) 00:19:19.304 [2024-12-05T09:51:06.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.304 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:19.304 ftl0 : 4.03 5224.38 20.41 0.00 0.00 24399.62 340.28 49404.06 00:19:19.304 [2024-12-05T09:51:06.933Z] =================================================================================================================== 00:19:19.304 [2024-12-05T09:51:06.933Z] Total : 5224.38 20.41 0.00 0.00 24399.62 0.00 49404.06 00:19:19.304 { 00:19:19.304 "results": [ 00:19:19.304 { 00:19:19.304 "job": "ftl0", 00:19:19.304 "core_mask": "0x1", 00:19:19.304 "workload": "randwrite", 00:19:19.304 "status": "finished", 00:19:19.304 "queue_depth": 128, 00:19:19.304 "io_size": 4096, 00:19:19.304 "runtime": 4.033205, 00:19:19.304 "iops": 5224.381106340987, 00:19:19.304 "mibps": 20.40773869664448, 00:19:19.304 "io_failed": 0, 00:19:19.304 "io_timeout": 0, 00:19:19.304 "avg_latency_us": 24399.61561402292, 00:19:19.304 "min_latency_us": 340.2830769230769, 00:19:19.304 "max_latency_us": 49404.06153846154 00:19:19.304 } 00:19:19.304 ], 00:19:19.304 "core_count": 1 00:19:19.304 } 00:19:19.304 [2024-12-05 09:51:06.912661] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:19.564 09:51:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:19.564 [2024-12-05 09:51:07.034312] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:19.564 Running I/O for 4 seconds... 
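The depth-128 numbers also pass a Little's Law sanity check: average I/Os in flight is roughly IOPS x mean latency, and it should track the configured queue depth when the device, not the submitter, is the bottleneck. A sketch over the two randwrite runs, with all values copied from the log:

    # Little's Law: in-flight = arrival rate (IOPS) x mean latency (s).
    runs = [
        {"queue_depth": 1,   "iops": 1324.122178934093, "avg_latency_us": 796.3300277161059},
        {"queue_depth": 128, "iops": 5224.381106340987, "avg_latency_us": 24399.61561402292},
    ]
    for r in runs:
        in_flight = r["iops"] * r["avg_latency_us"] / 1e6
        print(f"depth {r['queue_depth']:>3}: ~{in_flight:.1f} I/Os in flight")
    # -> ~1.1 and ~127.5: both runs kept their queues essentially full.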
00:19:21.448 4721.00 IOPS, 18.44 MiB/s [2024-12-05T09:51:10.465Z] 4608.00 IOPS, 18.00 MiB/s [2024-12-05T09:51:11.410Z] 4581.67 IOPS, 17.90 MiB/s [2024-12-05T09:51:11.410Z] 4602.00 IOPS, 17.98 MiB/s 00:19:23.781 Latency(us) 00:19:23.781 [2024-12-05T09:51:11.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.781 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:23.781 Verification LBA range: start 0x0 length 0x1400000 00:19:23.781 ftl0 : 4.02 4614.89 18.03 0.00 0.00 27653.59 302.47 38313.35 00:19:23.781 [2024-12-05T09:51:11.410Z] =================================================================================================================== 00:19:23.781 [2024-12-05T09:51:11.410Z] Total : 4614.89 18.03 0.00 0.00 27653.59 0.00 38313.35 00:19:23.781 [2024-12-05 09:51:11.066545] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:23.781 { 00:19:23.781 "results": [ 00:19:23.781 { 00:19:23.781 "job": "ftl0", 00:19:23.781 "core_mask": "0x1", 00:19:23.781 "workload": "verify", 00:19:23.781 "status": "finished", 00:19:23.781 "verify_range": { 00:19:23.781 "start": 0, 00:19:23.781 "length": 20971520 00:19:23.781 }, 00:19:23.781 "queue_depth": 128, 00:19:23.781 "io_size": 4096, 00:19:23.781 "runtime": 4.015695, 00:19:23.781 "iops": 4614.89231627402, 00:19:23.781 "mibps": 18.02692311044539, 00:19:23.781 "io_failed": 0, 00:19:23.781 "io_timeout": 0, 00:19:23.781 "avg_latency_us": 27653.59056667054, 00:19:23.781 "min_latency_us": 302.4738461538462, 00:19:23.781 "max_latency_us": 38313.35384615385 00:19:23.781 } 00:19:23.781 ], 00:19:23.781 "core_count": 1 00:19:23.781 } 00:19:23.781 09:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:23.781 [2024-12-05 09:51:11.285662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.781 [2024-12-05 09:51:11.285716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:23.781 [2024-12-05 09:51:11.285729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:23.781 [2024-12-05 09:51:11.285740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.781 [2024-12-05 09:51:11.285761] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:23.781 [2024-12-05 09:51:11.288700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.781 [2024-12-05 09:51:11.288742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:23.781 [2024-12-05 09:51:11.288756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.917 ms 00:19:23.781 [2024-12-05 09:51:11.288783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.781 [2024-12-05 09:51:11.291709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.781 [2024-12-05 09:51:11.291760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:23.781 [2024-12-05 09:51:11.291777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.895 ms 00:19:23.781 [2024-12-05 09:51:11.291785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.043 [2024-12-05 09:51:11.522596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.043 [2024-12-05 09:51:11.522810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:19:24.043 [2024-12-05 09:51:11.522843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 230.783 ms 00:19:24.043 [2024-12-05 09:51:11.522853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.043 [2024-12-05 09:51:11.529106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.043 [2024-12-05 09:51:11.529154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:24.043 [2024-12-05 09:51:11.529169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.190 ms 00:19:24.043 [2024-12-05 09:51:11.529182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.043 [2024-12-05 09:51:11.555883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.043 [2024-12-05 09:51:11.556076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:24.043 [2024-12-05 09:51:11.556147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.625 ms 00:19:24.043 [2024-12-05 09:51:11.556171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.043 [2024-12-05 09:51:11.573699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.043 [2024-12-05 09:51:11.573887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:24.043 [2024-12-05 09:51:11.573914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.448 ms 00:19:24.043 [2024-12-05 09:51:11.573923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.043 [2024-12-05 09:51:11.574082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.043 [2024-12-05 09:51:11.574094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:24.043 [2024-12-05 09:51:11.574109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:19:24.043 [2024-12-05 09:51:11.574117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.043 [2024-12-05 09:51:11.600398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.043 [2024-12-05 09:51:11.600445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:24.043 [2024-12-05 09:51:11.600460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.260 ms 00:19:24.043 [2024-12-05 09:51:11.600467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.043 [2024-12-05 09:51:11.626139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.043 [2024-12-05 09:51:11.626198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:24.043 [2024-12-05 09:51:11.626212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.600 ms 00:19:24.043 [2024-12-05 09:51:11.626220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.043 [2024-12-05 09:51:11.651236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.043 [2024-12-05 09:51:11.651283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:24.043 [2024-12-05 09:51:11.651298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.962 ms 00:19:24.043 [2024-12-05 09:51:11.651306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.306 [2024-12-05 09:51:11.676097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.306 [2024-12-05 09:51:11.676142] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:24.306 [2024-12-05 09:51:11.676159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.698 ms 00:19:24.306 [2024-12-05 09:51:11.676167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.306 [2024-12-05 09:51:11.676215] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:24.306 [2024-12-05 09:51:11.676231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:24.306 [2024-12-05 09:51:11.676398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:24.307 [2024-12-05 09:51:11.676431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.676995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677144] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:24.307 [2024-12-05 09:51:11.677186] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:24.307 [2024-12-05 09:51:11.677197] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5fc12270-af2f-4fff-b4b8-fdf28fa4d685 00:19:24.307 [2024-12-05 09:51:11.677207] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:24.307 [2024-12-05 09:51:11.677217] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:24.307 [2024-12-05 09:51:11.677224] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:24.307 [2024-12-05 09:51:11.677234] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:24.307 [2024-12-05 09:51:11.677241] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:24.307 [2024-12-05 09:51:11.677251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:24.307 [2024-12-05 09:51:11.677258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:24.307 [2024-12-05 09:51:11.677269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:24.308 [2024-12-05 09:51:11.677276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:24.308 [2024-12-05 09:51:11.677285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.308 [2024-12-05 09:51:11.677293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:24.308 [2024-12-05 09:51:11.677304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:19:24.308 [2024-12-05 09:51:11.677312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.308 [2024-12-05 09:51:11.690992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.308 [2024-12-05 09:51:11.691181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:24.308 [2024-12-05 09:51:11.691205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.634 ms 00:19:24.308 [2024-12-05 09:51:11.691214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.308 [2024-12-05 09:51:11.691649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.308 [2024-12-05 09:51:11.691663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:24.308 [2024-12-05 09:51:11.691676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:19:24.308 [2024-12-05 09:51:11.691684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.308 [2024-12-05 09:51:11.730868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.308 [2024-12-05 09:51:11.730926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:24.308 [2024-12-05 09:51:11.730943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.308 [2024-12-05 09:51:11.730952] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:24.308 [2024-12-05 09:51:11.731019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.308 [2024-12-05 09:51:11.731029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:24.308 [2024-12-05 09:51:11.731039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.308 [2024-12-05 09:51:11.731046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.308 [2024-12-05 09:51:11.731134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.308 [2024-12-05 09:51:11.731146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:24.308 [2024-12-05 09:51:11.731156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.308 [2024-12-05 09:51:11.731164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.308 [2024-12-05 09:51:11.731181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.308 [2024-12-05 09:51:11.731190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:24.308 [2024-12-05 09:51:11.731199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.308 [2024-12-05 09:51:11.731207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.308 [2024-12-05 09:51:11.815326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.308 [2024-12-05 09:51:11.815384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:24.308 [2024-12-05 09:51:11.815403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.308 [2024-12-05 09:51:11.815413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.308 [2024-12-05 09:51:11.884222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.308 [2024-12-05 09:51:11.884279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:24.308 [2024-12-05 09:51:11.884294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.308 [2024-12-05 09:51:11.884303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.308 [2024-12-05 09:51:11.884412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.308 [2024-12-05 09:51:11.884424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:24.308 [2024-12-05 09:51:11.884435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.308 [2024-12-05 09:51:11.884444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.308 [2024-12-05 09:51:11.884491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.308 [2024-12-05 09:51:11.884502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:24.308 [2024-12-05 09:51:11.884541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.308 [2024-12-05 09:51:11.884549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.308 [2024-12-05 09:51:11.884651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.308 [2024-12-05 09:51:11.884666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:24.308 [2024-12-05 09:51:11.884681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:24.308 [2024-12-05 09:51:11.884690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.308 [2024-12-05 09:51:11.884725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.308 [2024-12-05 09:51:11.884734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:24.308 [2024-12-05 09:51:11.884744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.308 [2024-12-05 09:51:11.884752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.308 [2024-12-05 09:51:11.884793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.308 [2024-12-05 09:51:11.884806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:24.308 [2024-12-05 09:51:11.884816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.308 [2024-12-05 09:51:11.884832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.308 [2024-12-05 09:51:11.884880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.308 [2024-12-05 09:51:11.884891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:24.308 [2024-12-05 09:51:11.884901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.308 [2024-12-05 09:51:11.884909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.308 [2024-12-05 09:51:11.885052] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 599.343 ms, result 0 00:19:24.308 true 00:19:24.308 09:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75962 00:19:24.308 09:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75962 ']' 00:19:24.308 09:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75962 00:19:24.308 09:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:24.308 09:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.308 09:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75962 00:19:24.570 killing process with pid 75962 00:19:24.570 Received shutdown signal, test time was about 4.000000 seconds 00:19:24.570 00:19:24.570 Latency(us) 00:19:24.570 [2024-12-05T09:51:12.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.570 [2024-12-05T09:51:12.199Z] =================================================================================================================== 00:19:24.570 [2024-12-05T09:51:12.199Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:24.570 09:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.570 09:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.570 09:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75962' 00:19:24.570 09:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75962 00:19:24.570 09:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75962 00:19:28.785 Remove shared memory files 00:19:28.785 09:51:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:28.785 09:51:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:28.785 09:51:16 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:28.785 09:51:16 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:28.785 09:51:16 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:28.785 09:51:16 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:28.785 09:51:16 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:28.785 09:51:16 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:28.785 00:19:28.785 real 0m25.988s 00:19:28.785 user 0m28.469s 00:19:28.785 sys 0m1.075s 00:19:28.785 09:51:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.785 ************************************ 00:19:28.785 END TEST ftl_bdevperf 00:19:28.785 ************************************ 00:19:28.785 09:51:16 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:29.048 09:51:16 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:29.048 09:51:16 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:29.048 09:51:16 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.048 09:51:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:29.048 ************************************ 00:19:29.048 START TEST ftl_trim 00:19:29.048 ************************************ 00:19:29.048 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:29.048 * Looking for test storage... 00:19:29.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:29.048 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:29.048 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:19:29.048 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:29.048 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:29.048 09:51:16 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:29.048 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:29.048 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:29.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.048 --rc genhtml_branch_coverage=1 00:19:29.048 --rc genhtml_function_coverage=1 00:19:29.048 --rc genhtml_legend=1 00:19:29.048 --rc geninfo_all_blocks=1 00:19:29.048 --rc geninfo_unexecuted_blocks=1 00:19:29.048 00:19:29.048 ' 00:19:29.048 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:29.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.048 --rc genhtml_branch_coverage=1 00:19:29.048 --rc genhtml_function_coverage=1 00:19:29.048 --rc genhtml_legend=1 00:19:29.048 --rc geninfo_all_blocks=1 00:19:29.048 --rc geninfo_unexecuted_blocks=1 00:19:29.048 00:19:29.048 ' 00:19:29.048 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:29.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.048 --rc genhtml_branch_coverage=1 00:19:29.048 --rc genhtml_function_coverage=1 00:19:29.048 --rc genhtml_legend=1 00:19:29.048 --rc geninfo_all_blocks=1 00:19:29.048 --rc geninfo_unexecuted_blocks=1 00:19:29.048 00:19:29.048 ' 00:19:29.048 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:29.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.048 --rc genhtml_branch_coverage=1 00:19:29.048 --rc genhtml_function_coverage=1 00:19:29.048 --rc genhtml_legend=1 00:19:29.048 --rc geninfo_all_blocks=1 00:19:29.048 --rc geninfo_unexecuted_blocks=1 00:19:29.048 00:19:29.048 ' 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:29.048 09:51:16 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76320 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:29.048 09:51:16 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76320 00:19:29.048 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76320 ']' 00:19:29.049 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.049 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.049 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.049 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.049 09:51:16 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:29.309 [2024-12-05 09:51:16.735557] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:19:29.309 [2024-12-05 09:51:16.735731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76320 ] 00:19:29.309 [2024-12-05 09:51:16.904958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:29.568 [2024-12-05 09:51:17.027434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.568 [2024-12-05 09:51:17.027809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.568 [2024-12-05 09:51:17.027893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.140 09:51:17 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.140 09:51:17 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:30.140 09:51:17 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:30.140 09:51:17 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:30.140 09:51:17 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:30.140 09:51:17 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:30.140 09:51:17 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:30.140 09:51:17 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:30.713 09:51:18 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:30.713 09:51:18 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:30.713 09:51:18 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:30.713 09:51:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:30.713 09:51:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:30.713 09:51:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:30.713 09:51:18 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:30.713 09:51:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:30.713 09:51:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:30.713 { 00:19:30.713 "name": "nvme0n1", 00:19:30.713 "aliases": [ 
00:19:30.713 "8fac9e1c-606a-4eeb-bd56-9a2bc517053d" 00:19:30.713 ], 00:19:30.713 "product_name": "NVMe disk", 00:19:30.713 "block_size": 4096, 00:19:30.713 "num_blocks": 1310720, 00:19:30.713 "uuid": "8fac9e1c-606a-4eeb-bd56-9a2bc517053d", 00:19:30.713 "numa_id": -1, 00:19:30.714 "assigned_rate_limits": { 00:19:30.714 "rw_ios_per_sec": 0, 00:19:30.714 "rw_mbytes_per_sec": 0, 00:19:30.714 "r_mbytes_per_sec": 0, 00:19:30.714 "w_mbytes_per_sec": 0 00:19:30.714 }, 00:19:30.714 "claimed": true, 00:19:30.714 "claim_type": "read_many_write_one", 00:19:30.714 "zoned": false, 00:19:30.714 "supported_io_types": { 00:19:30.714 "read": true, 00:19:30.714 "write": true, 00:19:30.714 "unmap": true, 00:19:30.714 "flush": true, 00:19:30.714 "reset": true, 00:19:30.714 "nvme_admin": true, 00:19:30.714 "nvme_io": true, 00:19:30.714 "nvme_io_md": false, 00:19:30.714 "write_zeroes": true, 00:19:30.714 "zcopy": false, 00:19:30.714 "get_zone_info": false, 00:19:30.714 "zone_management": false, 00:19:30.714 "zone_append": false, 00:19:30.714 "compare": true, 00:19:30.714 "compare_and_write": false, 00:19:30.714 "abort": true, 00:19:30.714 "seek_hole": false, 00:19:30.714 "seek_data": false, 00:19:30.714 "copy": true, 00:19:30.714 "nvme_iov_md": false 00:19:30.714 }, 00:19:30.714 "driver_specific": { 00:19:30.714 "nvme": [ 00:19:30.714 { 00:19:30.714 "pci_address": "0000:00:11.0", 00:19:30.714 "trid": { 00:19:30.714 "trtype": "PCIe", 00:19:30.714 "traddr": "0000:00:11.0" 00:19:30.714 }, 00:19:30.714 "ctrlr_data": { 00:19:30.714 "cntlid": 0, 00:19:30.714 "vendor_id": "0x1b36", 00:19:30.714 "model_number": "QEMU NVMe Ctrl", 00:19:30.714 "serial_number": "12341", 00:19:30.714 "firmware_revision": "8.0.0", 00:19:30.714 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:30.714 "oacs": { 00:19:30.714 "security": 0, 00:19:30.714 "format": 1, 00:19:30.714 "firmware": 0, 00:19:30.714 "ns_manage": 1 00:19:30.714 }, 00:19:30.714 "multi_ctrlr": false, 00:19:30.714 "ana_reporting": false 00:19:30.714 }, 00:19:30.714 "vs": { 00:19:30.714 "nvme_version": "1.4" 00:19:30.714 }, 00:19:30.714 "ns_data": { 00:19:30.714 "id": 1, 00:19:30.714 "can_share": false 00:19:30.714 } 00:19:30.714 } 00:19:30.714 ], 00:19:30.714 "mp_policy": "active_passive" 00:19:30.714 } 00:19:30.714 } 00:19:30.714 ]' 00:19:30.714 09:51:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:30.714 09:51:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:30.714 09:51:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:30.714 09:51:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:30.714 09:51:18 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:30.714 09:51:18 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:30.974 09:51:18 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:30.974 09:51:18 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:30.974 09:51:18 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:30.974 09:51:18 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:30.974 09:51:18 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:30.974 09:51:18 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=ee07d6d4-7bf2-4a30-b0e4-945491dd1bb0 00:19:30.974 09:51:18 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:30.974 09:51:18 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u ee07d6d4-7bf2-4a30-b0e4-945491dd1bb0 00:19:31.233 09:51:18 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:31.491 09:51:18 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=6e7e50e2-e152-49c5-b944-60af194b7354 00:19:31.491 09:51:18 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6e7e50e2-e152-49c5-b944-60af194b7354 00:19:31.750 09:51:19 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=cba7f406-46f8-4a91-abba-1faeb9392aff 00:19:31.750 09:51:19 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cba7f406-46f8-4a91-abba-1faeb9392aff 00:19:31.750 09:51:19 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:31.750 09:51:19 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:31.750 09:51:19 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=cba7f406-46f8-4a91-abba-1faeb9392aff 00:19:31.750 09:51:19 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:31.750 09:51:19 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size cba7f406-46f8-4a91-abba-1faeb9392aff 00:19:31.750 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=cba7f406-46f8-4a91-abba-1faeb9392aff 00:19:31.750 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:31.750 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:31.750 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:31.750 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cba7f406-46f8-4a91-abba-1faeb9392aff 00:19:31.750 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:31.750 { 00:19:31.750 "name": "cba7f406-46f8-4a91-abba-1faeb9392aff", 00:19:31.750 "aliases": [ 00:19:31.751 "lvs/nvme0n1p0" 00:19:31.751 ], 00:19:31.751 "product_name": "Logical Volume", 00:19:31.751 "block_size": 4096, 00:19:31.751 "num_blocks": 26476544, 00:19:31.751 "uuid": "cba7f406-46f8-4a91-abba-1faeb9392aff", 00:19:31.751 "assigned_rate_limits": { 00:19:31.751 "rw_ios_per_sec": 0, 00:19:31.751 "rw_mbytes_per_sec": 0, 00:19:31.751 "r_mbytes_per_sec": 0, 00:19:31.751 "w_mbytes_per_sec": 0 00:19:31.751 }, 00:19:31.751 "claimed": false, 00:19:31.751 "zoned": false, 00:19:31.751 "supported_io_types": { 00:19:31.751 "read": true, 00:19:31.751 "write": true, 00:19:31.751 "unmap": true, 00:19:31.751 "flush": false, 00:19:31.751 "reset": true, 00:19:31.751 "nvme_admin": false, 00:19:31.751 "nvme_io": false, 00:19:31.751 "nvme_io_md": false, 00:19:31.751 "write_zeroes": true, 00:19:31.751 "zcopy": false, 00:19:31.751 "get_zone_info": false, 00:19:31.751 "zone_management": false, 00:19:31.751 "zone_append": false, 00:19:31.751 "compare": false, 00:19:31.751 "compare_and_write": false, 00:19:31.751 "abort": false, 00:19:31.751 "seek_hole": true, 00:19:31.751 "seek_data": true, 00:19:31.751 "copy": false, 00:19:31.751 "nvme_iov_md": false 00:19:31.751 }, 00:19:31.751 "driver_specific": { 00:19:31.751 "lvol": { 00:19:31.751 "lvol_store_uuid": "6e7e50e2-e152-49c5-b944-60af194b7354", 00:19:31.751 "base_bdev": "nvme0n1", 00:19:31.751 "thin_provision": true, 00:19:31.751 "num_allocated_clusters": 0, 00:19:31.751 "snapshot": false, 00:19:31.751 "clone": false, 00:19:31.751 "esnap_clone": false 00:19:31.751 } 00:19:31.751 } 00:19:31.751 } 00:19:31.751 ]' 00:19:31.751 09:51:19 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:32.010 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:32.010 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:32.010 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:32.010 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:32.010 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:32.010 09:51:19 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:32.010 09:51:19 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:32.010 09:51:19 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:32.268 09:51:19 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:32.268 09:51:19 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:32.268 09:51:19 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size cba7f406-46f8-4a91-abba-1faeb9392aff 00:19:32.268 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=cba7f406-46f8-4a91-abba-1faeb9392aff 00:19:32.268 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:32.268 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:32.268 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:32.268 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cba7f406-46f8-4a91-abba-1faeb9392aff 00:19:32.268 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:32.268 { 00:19:32.268 "name": "cba7f406-46f8-4a91-abba-1faeb9392aff", 00:19:32.268 "aliases": [ 00:19:32.268 "lvs/nvme0n1p0" 00:19:32.268 ], 00:19:32.268 "product_name": "Logical Volume", 00:19:32.268 "block_size": 4096, 00:19:32.268 "num_blocks": 26476544, 00:19:32.268 "uuid": "cba7f406-46f8-4a91-abba-1faeb9392aff", 00:19:32.268 "assigned_rate_limits": { 00:19:32.268 "rw_ios_per_sec": 0, 00:19:32.268 "rw_mbytes_per_sec": 0, 00:19:32.268 "r_mbytes_per_sec": 0, 00:19:32.268 "w_mbytes_per_sec": 0 00:19:32.268 }, 00:19:32.268 "claimed": false, 00:19:32.268 "zoned": false, 00:19:32.268 "supported_io_types": { 00:19:32.268 "read": true, 00:19:32.268 "write": true, 00:19:32.268 "unmap": true, 00:19:32.268 "flush": false, 00:19:32.268 "reset": true, 00:19:32.268 "nvme_admin": false, 00:19:32.268 "nvme_io": false, 00:19:32.268 "nvme_io_md": false, 00:19:32.268 "write_zeroes": true, 00:19:32.268 "zcopy": false, 00:19:32.268 "get_zone_info": false, 00:19:32.268 "zone_management": false, 00:19:32.268 "zone_append": false, 00:19:32.268 "compare": false, 00:19:32.268 "compare_and_write": false, 00:19:32.268 "abort": false, 00:19:32.268 "seek_hole": true, 00:19:32.268 "seek_data": true, 00:19:32.268 "copy": false, 00:19:32.268 "nvme_iov_md": false 00:19:32.268 }, 00:19:32.268 "driver_specific": { 00:19:32.268 "lvol": { 00:19:32.268 "lvol_store_uuid": "6e7e50e2-e152-49c5-b944-60af194b7354", 00:19:32.269 "base_bdev": "nvme0n1", 00:19:32.269 "thin_provision": true, 00:19:32.269 "num_allocated_clusters": 0, 00:19:32.269 "snapshot": false, 00:19:32.269 "clone": false, 00:19:32.269 "esnap_clone": false 00:19:32.269 } 00:19:32.269 } 00:19:32.269 } 00:19:32.269 ]' 00:19:32.269 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:32.528 09:51:19 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:32.528 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:32.528 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:32.528 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:32.528 09:51:19 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:32.528 09:51:19 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:32.528 09:51:19 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:32.528 09:51:20 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:32.528 09:51:20 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:32.528 09:51:20 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size cba7f406-46f8-4a91-abba-1faeb9392aff 00:19:32.528 09:51:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=cba7f406-46f8-4a91-abba-1faeb9392aff 00:19:32.528 09:51:20 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:32.528 09:51:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:32.528 09:51:20 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:32.528 09:51:20 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cba7f406-46f8-4a91-abba-1faeb9392aff 00:19:32.806 09:51:20 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:32.806 { 00:19:32.806 "name": "cba7f406-46f8-4a91-abba-1faeb9392aff", 00:19:32.806 "aliases": [ 00:19:32.806 "lvs/nvme0n1p0" 00:19:32.806 ], 00:19:32.806 "product_name": "Logical Volume", 00:19:32.806 "block_size": 4096, 00:19:32.806 "num_blocks": 26476544, 00:19:32.806 "uuid": "cba7f406-46f8-4a91-abba-1faeb9392aff", 00:19:32.806 "assigned_rate_limits": { 00:19:32.806 "rw_ios_per_sec": 0, 00:19:32.806 "rw_mbytes_per_sec": 0, 00:19:32.806 "r_mbytes_per_sec": 0, 00:19:32.806 "w_mbytes_per_sec": 0 00:19:32.806 }, 00:19:32.806 "claimed": false, 00:19:32.806 "zoned": false, 00:19:32.806 "supported_io_types": { 00:19:32.806 "read": true, 00:19:32.806 "write": true, 00:19:32.806 "unmap": true, 00:19:32.806 "flush": false, 00:19:32.806 "reset": true, 00:19:32.806 "nvme_admin": false, 00:19:32.806 "nvme_io": false, 00:19:32.806 "nvme_io_md": false, 00:19:32.806 "write_zeroes": true, 00:19:32.806 "zcopy": false, 00:19:32.806 "get_zone_info": false, 00:19:32.806 "zone_management": false, 00:19:32.806 "zone_append": false, 00:19:32.806 "compare": false, 00:19:32.806 "compare_and_write": false, 00:19:32.806 "abort": false, 00:19:32.806 "seek_hole": true, 00:19:32.806 "seek_data": true, 00:19:32.806 "copy": false, 00:19:32.806 "nvme_iov_md": false 00:19:32.806 }, 00:19:32.806 "driver_specific": { 00:19:32.806 "lvol": { 00:19:32.806 "lvol_store_uuid": "6e7e50e2-e152-49c5-b944-60af194b7354", 00:19:32.806 "base_bdev": "nvme0n1", 00:19:32.806 "thin_provision": true, 00:19:32.806 "num_allocated_clusters": 0, 00:19:32.806 "snapshot": false, 00:19:32.806 "clone": false, 00:19:32.806 "esnap_clone": false 00:19:32.806 } 00:19:32.806 } 00:19:32.806 } 00:19:32.806 ]' 00:19:32.806 09:51:20 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:32.806 09:51:20 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:32.806 09:51:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:32.806 09:51:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:32.806 09:51:20 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:32.806 09:51:20 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:32.806 09:51:20 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:32.806 09:51:20 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cba7f406-46f8-4a91-abba-1faeb9392aff -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:33.096 [2024-12-05 09:51:20.577201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.096 [2024-12-05 09:51:20.577242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:33.096 [2024-12-05 09:51:20.577256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:33.096 [2024-12-05 09:51:20.577262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.096 [2024-12-05 09:51:20.579473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.096 [2024-12-05 09:51:20.579505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:33.096 [2024-12-05 09:51:20.579525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.188 ms 00:19:33.096 [2024-12-05 09:51:20.579531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.096 [2024-12-05 09:51:20.579607] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:33.096 [2024-12-05 09:51:20.580146] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:33.096 [2024-12-05 09:51:20.580171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.096 [2024-12-05 09:51:20.580177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:33.096 [2024-12-05 09:51:20.580186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:19:33.096 [2024-12-05 09:51:20.580192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.096 [2024-12-05 09:51:20.580281] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7839b8c0-c26e-4e3f-8cc1-f17ca740b133 00:19:33.096 [2024-12-05 09:51:20.581277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.096 [2024-12-05 09:51:20.581306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:33.096 [2024-12-05 09:51:20.581315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:33.096 [2024-12-05 09:51:20.581322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.096 [2024-12-05 09:51:20.586530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.096 [2024-12-05 09:51:20.586556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:33.096 [2024-12-05 09:51:20.586566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.139 ms 00:19:33.096 [2024-12-05 09:51:20.586573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.096 [2024-12-05 09:51:20.586671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.096 [2024-12-05 09:51:20.586681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:33.096 [2024-12-05 09:51:20.586687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.056 ms 00:19:33.096 [2024-12-05 09:51:20.586697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.096 [2024-12-05 09:51:20.586729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.096 [2024-12-05 09:51:20.586738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:33.096 [2024-12-05 09:51:20.586744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:33.096 [2024-12-05 09:51:20.586752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.096 [2024-12-05 09:51:20.586776] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:33.096 [2024-12-05 09:51:20.589665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.096 [2024-12-05 09:51:20.589694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:33.096 [2024-12-05 09:51:20.589704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.891 ms 00:19:33.096 [2024-12-05 09:51:20.589710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.096 [2024-12-05 09:51:20.589755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.096 [2024-12-05 09:51:20.589774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:33.096 [2024-12-05 09:51:20.589781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:33.096 [2024-12-05 09:51:20.589787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.096 [2024-12-05 09:51:20.589817] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:33.096 [2024-12-05 09:51:20.589931] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:33.096 [2024-12-05 09:51:20.589949] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:33.096 [2024-12-05 09:51:20.589958] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:33.096 [2024-12-05 09:51:20.589967] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:33.097 [2024-12-05 09:51:20.589974] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:33.097 [2024-12-05 09:51:20.589981] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:33.097 [2024-12-05 09:51:20.589987] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:33.097 [2024-12-05 09:51:20.589995] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:33.097 [2024-12-05 09:51:20.590003] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:33.097 [2024-12-05 09:51:20.590009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.097 [2024-12-05 09:51:20.590015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:33.097 [2024-12-05 09:51:20.590022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:19:33.097 [2024-12-05 09:51:20.590027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.097 [2024-12-05 09:51:20.590111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.097 
[2024-12-05 09:51:20.590118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:33.097 [2024-12-05 09:51:20.590124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:33.097 [2024-12-05 09:51:20.590129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.097 [2024-12-05 09:51:20.590219] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:33.097 [2024-12-05 09:51:20.590231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:33.097 [2024-12-05 09:51:20.590239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:33.097 [2024-12-05 09:51:20.590245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.097 [2024-12-05 09:51:20.590252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:33.097 [2024-12-05 09:51:20.590258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:33.097 [2024-12-05 09:51:20.590264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:33.097 [2024-12-05 09:51:20.590270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:33.097 [2024-12-05 09:51:20.590277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:33.097 [2024-12-05 09:51:20.590282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:33.097 [2024-12-05 09:51:20.590288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:33.097 [2024-12-05 09:51:20.590293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:33.097 [2024-12-05 09:51:20.590301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:33.097 [2024-12-05 09:51:20.590306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:33.097 [2024-12-05 09:51:20.590312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:33.097 [2024-12-05 09:51:20.590317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.097 [2024-12-05 09:51:20.590326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:33.097 [2024-12-05 09:51:20.590332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:33.097 [2024-12-05 09:51:20.590339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.097 [2024-12-05 09:51:20.590345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:33.097 [2024-12-05 09:51:20.590351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:33.097 [2024-12-05 09:51:20.590356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.097 [2024-12-05 09:51:20.590363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:33.097 [2024-12-05 09:51:20.590368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:33.097 [2024-12-05 09:51:20.590374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.097 [2024-12-05 09:51:20.590379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:33.097 [2024-12-05 09:51:20.590386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:33.097 [2024-12-05 09:51:20.590390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.097 [2024-12-05 09:51:20.590397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:33.097 [2024-12-05 09:51:20.590402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:33.097 [2024-12-05 09:51:20.590408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.097 [2024-12-05 09:51:20.590413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:33.097 [2024-12-05 09:51:20.590421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:33.097 [2024-12-05 09:51:20.590425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:33.097 [2024-12-05 09:51:20.590433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:33.097 [2024-12-05 09:51:20.590438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:33.097 [2024-12-05 09:51:20.590444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:33.097 [2024-12-05 09:51:20.590449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:33.097 [2024-12-05 09:51:20.590456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:33.097 [2024-12-05 09:51:20.590461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.097 [2024-12-05 09:51:20.590468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:33.097 [2024-12-05 09:51:20.590473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:33.097 [2024-12-05 09:51:20.590479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.097 [2024-12-05 09:51:20.590484] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:33.097 [2024-12-05 09:51:20.590491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:33.097 [2024-12-05 09:51:20.590496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:33.097 [2024-12-05 09:51:20.590502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.097 [2024-12-05 09:51:20.590517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:33.097 [2024-12-05 09:51:20.590526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:33.097 [2024-12-05 09:51:20.590532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:33.097 [2024-12-05 09:51:20.590539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:33.097 [2024-12-05 09:51:20.590544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:33.097 [2024-12-05 09:51:20.590551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:33.097 [2024-12-05 09:51:20.590557] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:33.097 [2024-12-05 09:51:20.590565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:33.097 [2024-12-05 09:51:20.590574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:33.097 [2024-12-05 09:51:20.590580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:33.097 [2024-12-05 09:51:20.590586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:33.097 [2024-12-05 09:51:20.590593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:33.097 [2024-12-05 09:51:20.590598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:33.097 [2024-12-05 09:51:20.590607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:33.097 [2024-12-05 09:51:20.590613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:33.097 [2024-12-05 09:51:20.590620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:33.097 [2024-12-05 09:51:20.590625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:33.097 [2024-12-05 09:51:20.590636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:33.097 [2024-12-05 09:51:20.590642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:33.097 [2024-12-05 09:51:20.590649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:33.097 [2024-12-05 09:51:20.590654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:33.097 [2024-12-05 09:51:20.590662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:33.097 [2024-12-05 09:51:20.590668] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:33.097 [2024-12-05 09:51:20.590678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:33.097 [2024-12-05 09:51:20.590684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:33.097 [2024-12-05 09:51:20.590691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:33.097 [2024-12-05 09:51:20.590696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:33.097 [2024-12-05 09:51:20.590703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:33.097 [2024-12-05 09:51:20.590710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.097 [2024-12-05 09:51:20.590716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:33.097 [2024-12-05 09:51:20.590722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:19:33.097 [2024-12-05 09:51:20.590729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.097 [2024-12-05 09:51:20.590804] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:33.097 [2024-12-05 09:51:20.590815] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:35.630 [2024-12-05 09:51:22.774008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:22.774061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:35.630 [2024-12-05 09:51:22.774076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2183.194 ms 00:19:35.630 [2024-12-05 09:51:22.774087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:22.799762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:22.799808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.630 [2024-12-05 09:51:22.799820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.381 ms 00:19:35.630 [2024-12-05 09:51:22.799830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:22.799968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:22.799981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:35.630 [2024-12-05 09:51:22.800003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:35.630 [2024-12-05 09:51:22.800014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:22.844296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:22.844339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.630 [2024-12-05 09:51:22.844352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.243 ms 00:19:35.630 [2024-12-05 09:51:22.844363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:22.844450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:22.844464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.630 [2024-12-05 09:51:22.844473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:35.630 [2024-12-05 09:51:22.844482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:22.844823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:22.844849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.630 [2024-12-05 09:51:22.844858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:19:35.630 [2024-12-05 09:51:22.844868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:22.844978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:22.844994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.630 [2024-12-05 09:51:22.845013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:35.630 [2024-12-05 09:51:22.845025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:22.859305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:22.859338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:35.630 [2024-12-05 09:51:22.859348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.249 ms 00:19:35.630 [2024-12-05 09:51:22.859357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:22.870610] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:35.630 [2024-12-05 09:51:22.884878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:22.884912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:35.630 [2024-12-05 09:51:22.884924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.422 ms 00:19:35.630 [2024-12-05 09:51:22.884931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:22.949132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:22.949180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:35.630 [2024-12-05 09:51:22.949196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.133 ms 00:19:35.630 [2024-12-05 09:51:22.949206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:22.949439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:22.949458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:35.630 [2024-12-05 09:51:22.949471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:19:35.630 [2024-12-05 09:51:22.949479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:22.972862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:22.972898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:35.630 [2024-12-05 09:51:22.972911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.333 ms 00:19:35.630 [2024-12-05 09:51:22.972919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:22.995311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:22.995343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:35.630 [2024-12-05 09:51:22.995356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.326 ms 00:19:35.630 [2024-12-05 09:51:22.995364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:22.995957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:22.995982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:35.630 [2024-12-05 09:51:22.995993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:19:35.630 [2024-12-05 09:51:22.996001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:23.062602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:23.062640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:35.630 [2024-12-05 09:51:23.062656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.573 ms 00:19:35.630 [2024-12-05 09:51:23.062664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:35.630 [2024-12-05 09:51:23.086744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:23.086778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:35.630 [2024-12-05 09:51:23.086791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.990 ms 00:19:35.630 [2024-12-05 09:51:23.086799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:23.109152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:23.109183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:35.630 [2024-12-05 09:51:23.109196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.293 ms 00:19:35.630 [2024-12-05 09:51:23.109203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:23.132176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:23.132224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:35.630 [2024-12-05 09:51:23.132237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.901 ms 00:19:35.630 [2024-12-05 09:51:23.132245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:23.132311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:23.132324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:35.630 [2024-12-05 09:51:23.132337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:35.630 [2024-12-05 09:51:23.132344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:23.132417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-12-05 09:51:23.132426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:35.630 [2024-12-05 09:51:23.132436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:35.630 [2024-12-05 09:51:23.132443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-12-05 09:51:23.133410] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:35.630 [2024-12-05 09:51:23.136258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2555.944 ms, result 0 00:19:35.630 [2024-12-05 09:51:23.137090] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:35.630 { 00:19:35.630 "name": "ftl0", 00:19:35.630 "uuid": "7839b8c0-c26e-4e3f-8cc1-f17ca740b133" 00:19:35.630 } 00:19:35.630 09:51:23 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:35.630 09:51:23 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:35.630 09:51:23 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:35.630 09:51:23 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:35.630 09:51:23 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:35.630 09:51:23 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:35.630 09:51:23 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:35.890 09:51:23 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:36.149 [ 00:19:36.149 { 00:19:36.149 "name": "ftl0", 00:19:36.149 "aliases": [ 00:19:36.149 "7839b8c0-c26e-4e3f-8cc1-f17ca740b133" 00:19:36.149 ], 00:19:36.149 "product_name": "FTL disk", 00:19:36.149 "block_size": 4096, 00:19:36.149 "num_blocks": 23592960, 00:19:36.149 "uuid": "7839b8c0-c26e-4e3f-8cc1-f17ca740b133", 00:19:36.149 "assigned_rate_limits": { 00:19:36.149 "rw_ios_per_sec": 0, 00:19:36.149 "rw_mbytes_per_sec": 0, 00:19:36.149 "r_mbytes_per_sec": 0, 00:19:36.149 "w_mbytes_per_sec": 0 00:19:36.149 }, 00:19:36.149 "claimed": false, 00:19:36.149 "zoned": false, 00:19:36.149 "supported_io_types": { 00:19:36.149 "read": true, 00:19:36.149 "write": true, 00:19:36.149 "unmap": true, 00:19:36.149 "flush": true, 00:19:36.149 "reset": false, 00:19:36.149 "nvme_admin": false, 00:19:36.149 "nvme_io": false, 00:19:36.149 "nvme_io_md": false, 00:19:36.149 "write_zeroes": true, 00:19:36.149 "zcopy": false, 00:19:36.149 "get_zone_info": false, 00:19:36.149 "zone_management": false, 00:19:36.149 "zone_append": false, 00:19:36.149 "compare": false, 00:19:36.149 "compare_and_write": false, 00:19:36.149 "abort": false, 00:19:36.149 "seek_hole": false, 00:19:36.149 "seek_data": false, 00:19:36.149 "copy": false, 00:19:36.149 "nvme_iov_md": false 00:19:36.149 }, 00:19:36.149 "driver_specific": { 00:19:36.149 "ftl": { 00:19:36.149 "base_bdev": "cba7f406-46f8-4a91-abba-1faeb9392aff", 00:19:36.149 "cache": "nvc0n1p0" 00:19:36.149 } 00:19:36.149 } 00:19:36.149 } 00:19:36.149 ] 00:19:36.149 09:51:23 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:36.149 09:51:23 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:36.149 09:51:23 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:36.149 09:51:23 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:36.149 09:51:23 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:36.408 09:51:23 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:36.408 { 00:19:36.408 "name": "ftl0", 00:19:36.408 "aliases": [ 00:19:36.408 "7839b8c0-c26e-4e3f-8cc1-f17ca740b133" 00:19:36.408 ], 00:19:36.408 "product_name": "FTL disk", 00:19:36.408 "block_size": 4096, 00:19:36.408 "num_blocks": 23592960, 00:19:36.408 "uuid": "7839b8c0-c26e-4e3f-8cc1-f17ca740b133", 00:19:36.408 "assigned_rate_limits": { 00:19:36.408 "rw_ios_per_sec": 0, 00:19:36.408 "rw_mbytes_per_sec": 0, 00:19:36.408 "r_mbytes_per_sec": 0, 00:19:36.408 "w_mbytes_per_sec": 0 00:19:36.408 }, 00:19:36.408 "claimed": false, 00:19:36.408 "zoned": false, 00:19:36.408 "supported_io_types": { 00:19:36.408 "read": true, 00:19:36.408 "write": true, 00:19:36.408 "unmap": true, 00:19:36.408 "flush": true, 00:19:36.408 "reset": false, 00:19:36.408 "nvme_admin": false, 00:19:36.408 "nvme_io": false, 00:19:36.408 "nvme_io_md": false, 00:19:36.408 "write_zeroes": true, 00:19:36.408 "zcopy": false, 00:19:36.408 "get_zone_info": false, 00:19:36.408 "zone_management": false, 00:19:36.408 "zone_append": false, 00:19:36.408 "compare": false, 00:19:36.408 "compare_and_write": false, 00:19:36.408 "abort": false, 00:19:36.408 "seek_hole": false, 00:19:36.409 "seek_data": false, 00:19:36.409 "copy": false, 00:19:36.409 "nvme_iov_md": false 00:19:36.409 }, 00:19:36.409 "driver_specific": { 00:19:36.409 "ftl": { 00:19:36.409 "base_bdev": "cba7f406-46f8-4a91-abba-1faeb9392aff", 
00:19:36.409 "cache": "nvc0n1p0" 00:19:36.409 } 00:19:36.409 } 00:19:36.409 } 00:19:36.409 ]' 00:19:36.409 09:51:23 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:36.409 09:51:23 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:36.409 09:51:23 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:36.669 [2024-12-05 09:51:24.148374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.669 [2024-12-05 09:51:24.148411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:36.669 [2024-12-05 09:51:24.148424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:36.669 [2024-12-05 09:51:24.148433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.669 [2024-12-05 09:51:24.148465] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:36.669 [2024-12-05 09:51:24.150577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.669 [2024-12-05 09:51:24.150604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:36.669 [2024-12-05 09:51:24.150616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.099 ms 00:19:36.669 [2024-12-05 09:51:24.150623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.669 [2024-12-05 09:51:24.151102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.669 [2024-12-05 09:51:24.151121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:36.669 [2024-12-05 09:51:24.151129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:19:36.669 [2024-12-05 09:51:24.151135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.669 [2024-12-05 09:51:24.153885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.669 [2024-12-05 09:51:24.153905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:36.669 [2024-12-05 09:51:24.153914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.720 ms 00:19:36.669 [2024-12-05 09:51:24.153930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.669 [2024-12-05 09:51:24.159316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.669 [2024-12-05 09:51:24.159343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:36.669 [2024-12-05 09:51:24.159352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.348 ms 00:19:36.669 [2024-12-05 09:51:24.159359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.669 [2024-12-05 09:51:24.178209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.669 [2024-12-05 09:51:24.178243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:36.669 [2024-12-05 09:51:24.178256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.769 ms 00:19:36.669 [2024-12-05 09:51:24.178262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.669 [2024-12-05 09:51:24.190899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.669 [2024-12-05 09:51:24.190928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:36.669 [2024-12-05 09:51:24.190939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.581 ms 00:19:36.669 [2024-12-05 09:51:24.190947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.669 [2024-12-05 09:51:24.191128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.669 [2024-12-05 09:51:24.191136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:36.669 [2024-12-05 09:51:24.191144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:19:36.669 [2024-12-05 09:51:24.191150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.669 [2024-12-05 09:51:24.209080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.669 [2024-12-05 09:51:24.209108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:36.669 [2024-12-05 09:51:24.209118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.901 ms 00:19:36.669 [2024-12-05 09:51:24.209124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.669 [2024-12-05 09:51:24.226727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.669 [2024-12-05 09:51:24.226753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:36.669 [2024-12-05 09:51:24.226765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.538 ms 00:19:36.669 [2024-12-05 09:51:24.226771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.669 [2024-12-05 09:51:24.244174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.669 [2024-12-05 09:51:24.244201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:36.669 [2024-12-05 09:51:24.244210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.351 ms 00:19:36.669 [2024-12-05 09:51:24.244216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.669 [2024-12-05 09:51:24.261504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.669 [2024-12-05 09:51:24.261537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:36.669 [2024-12-05 09:51:24.261547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.192 ms 00:19:36.669 [2024-12-05 09:51:24.261552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.669 [2024-12-05 09:51:24.261603] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:36.669 [2024-12-05 09:51:24.261615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:36.669 [2024-12-05 09:51:24.261624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:36.669 [2024-12-05 09:51:24.261630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:36.669 [2024-12-05 09:51:24.261638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:36.669 [2024-12-05 09:51:24.261644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:36.669 [2024-12-05 09:51:24.261652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:36.669 [2024-12-05 09:51:24.261658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:36.669 [2024-12-05 09:51:24.261665] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 8-100: 0 / 261120 wr_cnt: 0 state: free (identical for every band) 00:19:36.671 [2024-12-05 09:51:24.262295] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:36.671 [2024-12-05 09:51:24.262304] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7839b8c0-c26e-4e3f-8cc1-f17ca740b133 [2024-12-05 09:51:24.262309] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 [2024-12-05 09:51:24.262316] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 [2024-12-05 09:51:24.262321] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 [2024-12-05 09:51:24.262331] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf [2024-12-05 09:51:24.262336] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: [2024-12-05 09:51:24.262343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:36.671 [2024-12-05 09:51:24.262350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:36.671 [2024-12-05 09:51:24.262356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:36.671 [2024-12-05 09:51:24.262361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:36.671 [2024-12-05 09:51:24.262370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.671 [2024-12-05 09:51:24.262376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:36.671 [2024-12-05 09:51:24.262383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:19:36.671 [2024-12-05 09:51:24.262389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.671 [2024-12-05 09:51:24.272111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.671 [2024-12-05 09:51:24.272139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:36.671 [2024-12-05 09:51:24.272150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.693 ms 00:19:36.671 [2024-12-05 09:51:24.272156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.671 [2024-12-05 09:51:24.272458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.671 [2024-12-05 09:51:24.272471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:36.671 [2024-12-05 09:51:24.272479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:19:36.671 [2024-12-05 09:51:24.272485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.930 [2024-12-05 09:51:24.307807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.930 [2024-12-05 09:51:24.307838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:36.930 [2024-12-05 09:51:24.307847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.930 [2024-12-05 09:51:24.307852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.930 [2024-12-05 09:51:24.307942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.930 [2024-12-05 09:51:24.307950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:36.930 [2024-12-05 09:51:24.307957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.930 [2024-12-05 09:51:24.307963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.930 [2024-12-05 09:51:24.308015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.930 [2024-12-05 09:51:24.308023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:36.930 [2024-12-05 09:51:24.308033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.930 [2024-12-05 09:51:24.308039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.930 [2024-12-05 09:51:24.308066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.930 [2024-12-05 09:51:24.308072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:36.930 [2024-12-05 09:51:24.308080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.930 [2024-12-05 09:51:24.308085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.930 [2024-12-05 09:51:24.371803] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.930 [2024-12-05 09:51:24.371843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.930 [2024-12-05 09:51:24.371852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.930 [2024-12-05 09:51:24.371859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.930 [2024-12-05 09:51:24.421600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.930 [2024-12-05 09:51:24.421640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.930 [2024-12-05 09:51:24.421650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.930 [2024-12-05 09:51:24.421656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.930 [2024-12-05 09:51:24.421732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.930 [2024-12-05 09:51:24.421740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:36.930 [2024-12-05 09:51:24.421750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.930 [2024-12-05 09:51:24.421758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.930 [2024-12-05 09:51:24.421802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.930 [2024-12-05 09:51:24.421808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:36.930 [2024-12-05 09:51:24.421815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.930 [2024-12-05 09:51:24.421821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.930 [2024-12-05 09:51:24.421912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.930 [2024-12-05 09:51:24.421920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:36.930 [2024-12-05 09:51:24.421927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.930 [2024-12-05 09:51:24.421934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.930 [2024-12-05 09:51:24.421982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.930 [2024-12-05 09:51:24.421989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:36.930 [2024-12-05 09:51:24.421996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.930 [2024-12-05 09:51:24.422002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.930 [2024-12-05 09:51:24.422040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.930 [2024-12-05 09:51:24.422046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:36.930 [2024-12-05 09:51:24.422055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.930 [2024-12-05 09:51:24.422061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.930 [2024-12-05 09:51:24.422106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.930 [2024-12-05 09:51:24.422113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:36.930 [2024-12-05 09:51:24.422121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.930 [2024-12-05 09:51:24.422126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:36.930 [2024-12-05 09:51:24.422277] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 273.892 ms, result 0 00:19:36.930 true 00:19:36.930 09:51:24 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76320 00:19:36.930 09:51:24 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76320 ']' 00:19:36.930 09:51:24 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76320 00:19:36.930 09:51:24 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:36.930 09:51:24 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.930 09:51:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76320 00:19:36.930 killing process with pid 76320 00:19:36.930 09:51:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:36.930 09:51:24 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:36.930 09:51:24 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76320' 00:19:36.930 09:51:24 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76320 00:19:36.930 09:51:24 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76320 00:19:43.506 09:51:30 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:44.078 65536+0 records in 00:19:44.078 65536+0 records out 00:19:44.078 268435456 bytes (268 MB, 256 MiB) copied, 1.09386 s, 245 MB/s 00:19:44.078 09:51:31 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:44.078 [2024-12-05 09:51:31.514673] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
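At this point the trim fixture is staged: the first FTL instance has shut down cleanly ('FTL shutdown', 273.892 ms, result 0), the bdev app (pid 76320) has been killed, and trim.sh has generated a 256 MiB random pattern that spdk_dd now pushes through ftl0. spdk_dd brings the bdev stack up itself from the JSON config, which is why a full FTL startup trace follows. A minimal sketch of this data-load step, using only the commands visible in this log; the file redirections are assumptions, since the shell trace shows the commands but not where their output goes:

    # Assemble the bdev subsystem config consumed by spdk_dd below
    # (trim.sh@54-56 earlier in the log; the > ftl.json redirection is assumed)
    {
      echo '{"subsystems": ['
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

    # 256 MiB of random data (trim.sh@66; of= is assumed, the trace omits the target)
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536

    # Write it through the FTL bdev (trim.sh@69, verbatim from the trace above)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json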
00:19:44.078 [2024-12-05 09:51:31.515576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76496 ] 00:19:44.078 [2024-12-05 09:51:31.690005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.339 [2024-12-05 09:51:31.766880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.602 [2024-12-05 09:51:31.977566] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:44.602 [2024-12-05 09:51:31.977614] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:44.602 [2024-12-05 09:51:32.126192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.602 [2024-12-05 09:51:32.126231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:44.602 [2024-12-05 09:51:32.126241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:44.602 [2024-12-05 09:51:32.126247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.602 [2024-12-05 09:51:32.128326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.602 [2024-12-05 09:51:32.128358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:44.602 [2024-12-05 09:51:32.128366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.066 ms 00:19:44.602 [2024-12-05 09:51:32.128372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.602 [2024-12-05 09:51:32.128429] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:44.602 [2024-12-05 09:51:32.128937] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:44.602 [2024-12-05 09:51:32.128958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.602 [2024-12-05 09:51:32.128965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:44.602 [2024-12-05 09:51:32.128972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:19:44.602 [2024-12-05 09:51:32.128977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.602 [2024-12-05 09:51:32.129942] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:44.602 [2024-12-05 09:51:32.139651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.602 [2024-12-05 09:51:32.139682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:44.602 [2024-12-05 09:51:32.139690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.710 ms 00:19:44.602 [2024-12-05 09:51:32.139696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.602 [2024-12-05 09:51:32.139763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.602 [2024-12-05 09:51:32.139772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:44.602 [2024-12-05 09:51:32.139779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:44.602 [2024-12-05 09:51:32.139785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.602 [2024-12-05 09:51:32.144217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:44.602 [2024-12-05 09:51:32.144242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:44.602 [2024-12-05 09:51:32.144250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.403 ms 00:19:44.602 [2024-12-05 09:51:32.144255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.602 [2024-12-05 09:51:32.144340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.602 [2024-12-05 09:51:32.144347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:44.602 [2024-12-05 09:51:32.144354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:44.602 [2024-12-05 09:51:32.144360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.602 [2024-12-05 09:51:32.144380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.602 [2024-12-05 09:51:32.144386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:44.602 [2024-12-05 09:51:32.144392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:44.602 [2024-12-05 09:51:32.144398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.602 [2024-12-05 09:51:32.144412] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:44.602 [2024-12-05 09:51:32.147147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.602 [2024-12-05 09:51:32.147171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:44.602 [2024-12-05 09:51:32.147179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.738 ms 00:19:44.602 [2024-12-05 09:51:32.147184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.602 [2024-12-05 09:51:32.147213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.602 [2024-12-05 09:51:32.147220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:44.602 [2024-12-05 09:51:32.147226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:44.602 [2024-12-05 09:51:32.147232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.602 [2024-12-05 09:51:32.147246] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:44.602 [2024-12-05 09:51:32.147261] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:44.602 [2024-12-05 09:51:32.147287] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:44.602 [2024-12-05 09:51:32.147298] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:44.602 [2024-12-05 09:51:32.147376] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:44.602 [2024-12-05 09:51:32.147385] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:44.602 [2024-12-05 09:51:32.147393] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:44.602 [2024-12-05 09:51:32.147402] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:44.602 [2024-12-05 09:51:32.147408] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:44.602 [2024-12-05 09:51:32.147414] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:44.602 [2024-12-05 09:51:32.147420] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:44.602 [2024-12-05 09:51:32.147426] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:44.602 [2024-12-05 09:51:32.147431] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:44.602 [2024-12-05 09:51:32.147437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.602 [2024-12-05 09:51:32.147443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:44.602 [2024-12-05 09:51:32.147448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:19:44.602 [2024-12-05 09:51:32.147453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.602 [2024-12-05 09:51:32.147529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.602 [2024-12-05 09:51:32.147539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:44.602 [2024-12-05 09:51:32.147545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:44.602 [2024-12-05 09:51:32.147551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.602 [2024-12-05 09:51:32.147627] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:44.602 [2024-12-05 09:51:32.147635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:44.602 [2024-12-05 09:51:32.147641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:44.602 [2024-12-05 09:51:32.147647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.602 [2024-12-05 09:51:32.147653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:44.602 [2024-12-05 09:51:32.147659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:44.602 [2024-12-05 09:51:32.147664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:44.602 [2024-12-05 09:51:32.147670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:44.602 [2024-12-05 09:51:32.147675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:44.602 [2024-12-05 09:51:32.147680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:44.602 [2024-12-05 09:51:32.147685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:44.602 [2024-12-05 09:51:32.147694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:44.602 [2024-12-05 09:51:32.147699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:44.602 [2024-12-05 09:51:32.147705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:44.602 [2024-12-05 09:51:32.147710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:44.602 [2024-12-05 09:51:32.147715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.602 [2024-12-05 09:51:32.147720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:44.602 [2024-12-05 09:51:32.147725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:44.602 [2024-12-05 09:51:32.147730] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.602 [2024-12-05 09:51:32.147735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:44.603 [2024-12-05 09:51:32.147740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:44.603 [2024-12-05 09:51:32.147745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.603 [2024-12-05 09:51:32.147750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:44.603 [2024-12-05 09:51:32.147755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:44.603 [2024-12-05 09:51:32.147760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.603 [2024-12-05 09:51:32.147765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:44.603 [2024-12-05 09:51:32.147770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:44.603 [2024-12-05 09:51:32.147774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.603 [2024-12-05 09:51:32.147780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:44.603 [2024-12-05 09:51:32.147784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:44.603 [2024-12-05 09:51:32.147789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.603 [2024-12-05 09:51:32.147794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:44.603 [2024-12-05 09:51:32.147799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:44.603 [2024-12-05 09:51:32.147803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:44.603 [2024-12-05 09:51:32.147808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:44.603 [2024-12-05 09:51:32.147813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:44.603 [2024-12-05 09:51:32.147817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:44.603 [2024-12-05 09:51:32.147823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:44.603 [2024-12-05 09:51:32.147827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:44.603 [2024-12-05 09:51:32.147832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.603 [2024-12-05 09:51:32.147837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:44.603 [2024-12-05 09:51:32.147843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:44.603 [2024-12-05 09:51:32.147847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.603 [2024-12-05 09:51:32.147852] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:44.603 [2024-12-05 09:51:32.147858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:44.603 [2024-12-05 09:51:32.147866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:44.603 [2024-12-05 09:51:32.147872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.603 [2024-12-05 09:51:32.147878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:44.603 [2024-12-05 09:51:32.147883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:44.603 [2024-12-05 09:51:32.147889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:44.603 
[2024-12-05 09:51:32.147895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:44.603 [2024-12-05 09:51:32.147908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:44.603 [2024-12-05 09:51:32.147913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:44.603 [2024-12-05 09:51:32.147920] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:44.603 [2024-12-05 09:51:32.147927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:44.603 [2024-12-05 09:51:32.147934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:44.603 [2024-12-05 09:51:32.147940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:44.603 [2024-12-05 09:51:32.147945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:44.603 [2024-12-05 09:51:32.147950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:44.603 [2024-12-05 09:51:32.147956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:44.603 [2024-12-05 09:51:32.147962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:44.603 [2024-12-05 09:51:32.147967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:44.603 [2024-12-05 09:51:32.147972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:44.603 [2024-12-05 09:51:32.147978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:44.603 [2024-12-05 09:51:32.147983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:44.603 [2024-12-05 09:51:32.147988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:44.603 [2024-12-05 09:51:32.147994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:44.603 [2024-12-05 09:51:32.147999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:44.603 [2024-12-05 09:51:32.148005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:44.603 [2024-12-05 09:51:32.148010] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:44.603 [2024-12-05 09:51:32.148017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:44.603 [2024-12-05 09:51:32.148024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:44.603 [2024-12-05 09:51:32.148030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:44.603 [2024-12-05 09:51:32.148035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:44.603 [2024-12-05 09:51:32.148040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:44.603 [2024-12-05 09:51:32.148046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.603 [2024-12-05 09:51:32.148053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:44.603 [2024-12-05 09:51:32.148060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:19:44.603 [2024-12-05 09:51:32.148066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.603 [2024-12-05 09:51:32.168943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.603 [2024-12-05 09:51:32.168975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:44.603 [2024-12-05 09:51:32.168983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.835 ms 00:19:44.603 [2024-12-05 09:51:32.168990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.603 [2024-12-05 09:51:32.169083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.603 [2024-12-05 09:51:32.169091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:44.603 [2024-12-05 09:51:32.169098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:44.603 [2024-12-05 09:51:32.169104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.603 [2024-12-05 09:51:32.215298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.603 [2024-12-05 09:51:32.215332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:44.603 [2024-12-05 09:51:32.215343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.177 ms 00:19:44.603 [2024-12-05 09:51:32.215349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.603 [2024-12-05 09:51:32.215410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.603 [2024-12-05 09:51:32.215419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:44.603 [2024-12-05 09:51:32.215426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:44.603 [2024-12-05 09:51:32.215432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.603 [2024-12-05 09:51:32.215730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.603 [2024-12-05 09:51:32.215744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:44.603 [2024-12-05 09:51:32.215751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:19:44.603 [2024-12-05 09:51:32.215761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.603 [2024-12-05 09:51:32.215868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.603 [2024-12-05 09:51:32.215923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:44.603 [2024-12-05 09:51:32.215931] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:44.603 [2024-12-05 09:51:32.215937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.603 [2024-12-05 09:51:32.226755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.603 [2024-12-05 09:51:32.226780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:44.603 [2024-12-05 09:51:32.226788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.802 ms 00:19:44.603 [2024-12-05 09:51:32.226794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.865 [2024-12-05 09:51:32.236995] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:44.865 [2024-12-05 09:51:32.237025] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:44.865 [2024-12-05 09:51:32.237035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.865 [2024-12-05 09:51:32.237041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:44.865 [2024-12-05 09:51:32.237048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.154 ms 00:19:44.865 [2024-12-05 09:51:32.237054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.865 [2024-12-05 09:51:32.255877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.865 [2024-12-05 09:51:32.255921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:44.865 [2024-12-05 09:51:32.255931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.766 ms 00:19:44.865 [2024-12-05 09:51:32.255937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.865 [2024-12-05 09:51:32.265052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.865 [2024-12-05 09:51:32.265080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:44.865 [2024-12-05 09:51:32.265088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.049 ms 00:19:44.865 [2024-12-05 09:51:32.265093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.865 [2024-12-05 09:51:32.274116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.865 [2024-12-05 09:51:32.274142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:44.865 [2024-12-05 09:51:32.274149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.979 ms 00:19:44.865 [2024-12-05 09:51:32.274154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.865 [2024-12-05 09:51:32.274630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.865 [2024-12-05 09:51:32.274650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:44.865 [2024-12-05 09:51:32.274658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:19:44.865 [2024-12-05 09:51:32.274664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.865 [2024-12-05 09:51:32.318462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.866 [2024-12-05 09:51:32.318500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:44.866 [2024-12-05 09:51:32.318516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 43.781 ms 00:19:44.866 [2024-12-05 09:51:32.318523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.866 [2024-12-05 09:51:32.326330] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:44.866 [2024-12-05 09:51:32.337918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.866 [2024-12-05 09:51:32.337947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:44.866 [2024-12-05 09:51:32.337957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.323 ms 00:19:44.866 [2024-12-05 09:51:32.337963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.866 [2024-12-05 09:51:32.338039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.866 [2024-12-05 09:51:32.338048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:44.866 [2024-12-05 09:51:32.338055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:44.866 [2024-12-05 09:51:32.338060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.866 [2024-12-05 09:51:32.338097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.866 [2024-12-05 09:51:32.338104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:44.866 [2024-12-05 09:51:32.338111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:44.866 [2024-12-05 09:51:32.338117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.866 [2024-12-05 09:51:32.338143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.866 [2024-12-05 09:51:32.338151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:44.866 [2024-12-05 09:51:32.338157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:44.866 [2024-12-05 09:51:32.338162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.866 [2024-12-05 09:51:32.338186] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:44.866 [2024-12-05 09:51:32.338193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.866 [2024-12-05 09:51:32.338199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:44.866 [2024-12-05 09:51:32.338205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:44.866 [2024-12-05 09:51:32.338211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.866 [2024-12-05 09:51:32.356893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.866 [2024-12-05 09:51:32.356923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:44.866 [2024-12-05 09:51:32.356931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.668 ms 00:19:44.866 [2024-12-05 09:51:32.356937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.866 [2024-12-05 09:51:32.357005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.866 [2024-12-05 09:51:32.357013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:44.866 [2024-12-05 09:51:32.357020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:44.866 [2024-12-05 09:51:32.357026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
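Every startup action above has returned status 0; just below, the 'FTL startup' management process reports success (231.644 ms, result 0) and spdk_dd begins copying the 256 MiB pattern into ftl0. To confirm by hand that a restored FTL bdev is usable, the RPCs the harness itself issued earlier in this log suffice. A minimal sketch, assuming a long-running SPDK application with an RPC socket is exposing ftl0 (as in the earlier fixture; spdk_dd is a one-shot tool, so this does not target the copy below):

    # Wait up to 2000 ms for ftl0 to register, then pull its block count
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks'
    # -> 23592960 blocks of 4096 bytes, i.e. 90 GiB of logical space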
00:19:44.866 [2024-12-05 09:51:32.358053] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:44.866 [2024-12-05 09:51:32.360327] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 231.644 ms, result 0 00:19:44.866 [2024-12-05 09:51:32.361287] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:44.866 [2024-12-05 09:51:32.371939] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:45.808  [2024-12-05T09:51:34.380Z] Copying: 22/256 [MB] (22 MBps) [2024-12-05T09:51:35.773Z] Copying: 47/256 [MB] (25 MBps) [2024-12-05T09:51:36.717Z] Copying: 84/256 [MB] (37 MBps) [2024-12-05T09:51:37.661Z] Copying: 131/256 [MB] (46 MBps) [2024-12-05T09:51:38.605Z] Copying: 173/256 [MB] (42 MBps) [2024-12-05T09:51:39.550Z] Copying: 193/256 [MB] (19 MBps) [2024-12-05T09:51:40.494Z] Copying: 206/256 [MB] (13 MBps) [2024-12-05T09:51:41.463Z] Copying: 218/256 [MB] (11 MBps) [2024-12-05T09:51:42.407Z] Copying: 228/256 [MB] (10 MBps) [2024-12-05T09:51:43.405Z] Copying: 238/256 [MB] (10 MBps) [2024-12-05T09:51:43.975Z] Copying: 254784/262144 [kB] (10112 kBps) [2024-12-05T09:51:43.975Z] Copying: 256/256 [MB] (average 22 MBps)[2024-12-05 09:51:43.893491] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:56.346 [2024-12-05 09:51:43.903586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.346 [2024-12-05 09:51:43.903634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:56.346 [2024-12-05 09:51:43.903649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:56.346 [2024-12-05 09:51:43.903665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.346 [2024-12-05 09:51:43.903689] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:56.346 [2024-12-05 09:51:43.906646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.346 [2024-12-05 09:51:43.906690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:56.346 [2024-12-05 09:51:43.906701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.942 ms 00:19:56.346 [2024-12-05 09:51:43.906709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.346 [2024-12-05 09:51:43.910009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.346 [2024-12-05 09:51:43.910057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:56.346 [2024-12-05 09:51:43.910068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.271 ms 00:19:56.346 [2024-12-05 09:51:43.910077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.346 [2024-12-05 09:51:43.918883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.346 [2024-12-05 09:51:43.918936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:56.346 [2024-12-05 09:51:43.918947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.788 ms 00:19:56.346 [2024-12-05 09:51:43.918955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.346 [2024-12-05 09:51:43.925896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:56.346 [2024-12-05 09:51:43.925939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:56.346 [2024-12-05 09:51:43.925951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.898 ms 00:19:56.346 [2024-12-05 09:51:43.925959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.346 [2024-12-05 09:51:43.951719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.346 [2024-12-05 09:51:43.951770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:56.346 [2024-12-05 09:51:43.951782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.698 ms 00:19:56.346 [2024-12-05 09:51:43.951790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.346 [2024-12-05 09:51:43.968322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.346 [2024-12-05 09:51:43.968375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:56.346 [2024-12-05 09:51:43.968390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.483 ms 00:19:56.346 [2024-12-05 09:51:43.968399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.346 [2024-12-05 09:51:43.968575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.346 [2024-12-05 09:51:43.968589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:56.346 [2024-12-05 09:51:43.968598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:19:56.346 [2024-12-05 09:51:43.968615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.607 [2024-12-05 09:51:43.994334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.607 [2024-12-05 09:51:43.994380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:56.607 [2024-12-05 09:51:43.994393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.701 ms 00:19:56.607 [2024-12-05 09:51:43.994400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.607 [2024-12-05 09:51:44.019957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.607 [2024-12-05 09:51:44.020001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:56.608 [2024-12-05 09:51:44.020012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.498 ms 00:19:56.608 [2024-12-05 09:51:44.020020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.608 [2024-12-05 09:51:44.045056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.608 [2024-12-05 09:51:44.045101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:56.608 [2024-12-05 09:51:44.045111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.977 ms 00:19:56.608 [2024-12-05 09:51:44.045118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.608 [2024-12-05 09:51:44.070332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.608 [2024-12-05 09:51:44.070377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:56.608 [2024-12-05 09:51:44.070387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.118 ms 00:19:56.608 [2024-12-05 09:51:44.070395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.608 [2024-12-05 
09:51:44.070440] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:56.608 [2024-12-05 09:51:44.070455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 
09:51:44.070661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:19:56.608 [2024-12-05 09:51:44.070846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.070996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:56.608 [2024-12-05 09:51:44.071278] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:56.608 [2024-12-05 09:51:44.071287] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7839b8c0-c26e-4e3f-8cc1-f17ca740b133 00:19:56.608 [2024-12-05 09:51:44.071295] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:56.608 [2024-12-05 09:51:44.071304] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:56.608 [2024-12-05 09:51:44.071311] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:56.608 [2024-12-05 09:51:44.071319] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:56.608 [2024-12-05 09:51:44.071327] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:56.608 [2024-12-05 09:51:44.071334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:56.608 [2024-12-05 09:51:44.071346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:56.608 [2024-12-05 09:51:44.071353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:56.608 [2024-12-05 09:51:44.071359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:56.608 [2024-12-05 09:51:44.071366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.608 [2024-12-05 09:51:44.071377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:56.608 [2024-12-05 09:51:44.071386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:19:56.608 [2024-12-05 09:51:44.071394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.608 [2024-12-05 09:51:44.084884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.608 [2024-12-05 09:51:44.084925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:56.608 [2024-12-05 09:51:44.084937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.460 ms 00:19:56.608 [2024-12-05 09:51:44.084944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.608 [2024-12-05 09:51:44.085346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.608 [2024-12-05 09:51:44.085364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:56.608 [2024-12-05 09:51:44.085374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:19:56.608 [2024-12-05 09:51:44.085385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.608 [2024-12-05 09:51:44.124313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.608 [2024-12-05 09:51:44.124361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:56.608 [2024-12-05 09:51:44.124372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.608 [2024-12-05 09:51:44.124381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.608 [2024-12-05 09:51:44.124466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.608 [2024-12-05 09:51:44.124475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:56.608 [2024-12-05 09:51:44.124484] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.608 [2024-12-05 09:51:44.124492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.608 [2024-12-05 09:51:44.124559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.608 [2024-12-05 09:51:44.124572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:56.608 [2024-12-05 09:51:44.124581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.608 [2024-12-05 09:51:44.124590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.608 [2024-12-05 09:51:44.124607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.608 [2024-12-05 09:51:44.124618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:56.608 [2024-12-05 09:51:44.124626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.608 [2024-12-05 09:51:44.124634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.608 [2024-12-05 09:51:44.210152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.609 [2024-12-05 09:51:44.210211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:56.609 [2024-12-05 09:51:44.210224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.609 [2024-12-05 09:51:44.210233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.869 [2024-12-05 09:51:44.279663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.869 [2024-12-05 09:51:44.279718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:56.869 [2024-12-05 09:51:44.279730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.869 [2024-12-05 09:51:44.279740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.869 [2024-12-05 09:51:44.279797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.869 [2024-12-05 09:51:44.279807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:56.869 [2024-12-05 09:51:44.279816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.870 [2024-12-05 09:51:44.279824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.870 [2024-12-05 09:51:44.279858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.870 [2024-12-05 09:51:44.279867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:56.870 [2024-12-05 09:51:44.279883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.870 [2024-12-05 09:51:44.279894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.870 [2024-12-05 09:51:44.280024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.870 [2024-12-05 09:51:44.280037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:56.870 [2024-12-05 09:51:44.280047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.870 [2024-12-05 09:51:44.280055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.870 [2024-12-05 09:51:44.280089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.870 [2024-12-05 09:51:44.280099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:19:56.870 [2024-12-05 09:51:44.280109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.870 [2024-12-05 09:51:44.280121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.870 [2024-12-05 09:51:44.280164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.870 [2024-12-05 09:51:44.280173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:56.870 [2024-12-05 09:51:44.280182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.870 [2024-12-05 09:51:44.280191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.870 [2024-12-05 09:51:44.280238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.870 [2024-12-05 09:51:44.280249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:56.870 [2024-12-05 09:51:44.280261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.870 [2024-12-05 09:51:44.280269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.870 [2024-12-05 09:51:44.280426] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.827 ms, result 0 00:19:57.812 00:19:57.812 00:19:57.812 09:51:45 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76643 00:19:57.812 09:51:45 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76643 00:19:57.812 09:51:45 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:57.812 09:51:45 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76643 ']' 00:19:57.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.812 09:51:45 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.812 09:51:45 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.812 09:51:45 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.813 09:51:45 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.813 09:51:45 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:57.813 [2024-12-05 09:51:45.250599] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:19:57.813 [2024-12-05 09:51:45.250915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76643 ] 00:19:57.813 [2024-12-05 09:51:45.409828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.073 [2024-12-05 09:51:45.483451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.642 09:51:46 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.642 09:51:46 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:58.642 09:51:46 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:58.902 [2024-12-05 09:51:46.288207] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:58.902 [2024-12-05 09:51:46.288254] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:58.902 [2024-12-05 09:51:46.456124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.902 [2024-12-05 09:51:46.456159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:58.902 [2024-12-05 09:51:46.456171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:58.902 [2024-12-05 09:51:46.456178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.902 [2024-12-05 09:51:46.458285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.902 [2024-12-05 09:51:46.458315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.902 [2024-12-05 09:51:46.458324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.092 ms 00:19:58.902 [2024-12-05 09:51:46.458330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.902 [2024-12-05 09:51:46.458388] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:58.902 [2024-12-05 09:51:46.458915] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:58.902 [2024-12-05 09:51:46.459028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.902 [2024-12-05 09:51:46.459037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.902 [2024-12-05 09:51:46.459045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:19:58.902 [2024-12-05 09:51:46.459051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.902 [2024-12-05 09:51:46.460036] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:58.902 [2024-12-05 09:51:46.470216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.902 [2024-12-05 09:51:46.470325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:58.902 [2024-12-05 09:51:46.470338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.188 ms 00:19:58.902 [2024-12-05 09:51:46.470345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.902 [2024-12-05 09:51:46.470412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.902 [2024-12-05 09:51:46.470422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:58.902 [2024-12-05 09:51:46.470428] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:58.902 [2024-12-05 09:51:46.470435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.902 [2024-12-05 09:51:46.474669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.902 [2024-12-05 09:51:46.474698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.902 [2024-12-05 09:51:46.474705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.197 ms 00:19:58.902 [2024-12-05 09:51:46.474712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.902 [2024-12-05 09:51:46.474784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.902 [2024-12-05 09:51:46.474794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.902 [2024-12-05 09:51:46.474800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:58.902 [2024-12-05 09:51:46.474809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.902 [2024-12-05 09:51:46.474826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.902 [2024-12-05 09:51:46.474833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:58.902 [2024-12-05 09:51:46.474839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:58.902 [2024-12-05 09:51:46.474846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.902 [2024-12-05 09:51:46.474862] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:58.902 [2024-12-05 09:51:46.477599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.902 [2024-12-05 09:51:46.477621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.902 [2024-12-05 09:51:46.477630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.738 ms 00:19:58.902 [2024-12-05 09:51:46.477635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.902 [2024-12-05 09:51:46.477666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.902 [2024-12-05 09:51:46.477673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:58.902 [2024-12-05 09:51:46.477680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:58.902 [2024-12-05 09:51:46.477687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.902 [2024-12-05 09:51:46.477702] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:58.902 [2024-12-05 09:51:46.477717] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:58.902 [2024-12-05 09:51:46.477750] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:58.902 [2024-12-05 09:51:46.477761] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:58.902 [2024-12-05 09:51:46.477841] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:58.902 [2024-12-05 09:51:46.477849] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:58.902 [2024-12-05 09:51:46.477860] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:58.902 [2024-12-05 09:51:46.477868] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:58.902 [2024-12-05 09:51:46.477876] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:58.902 [2024-12-05 09:51:46.477882] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:58.902 [2024-12-05 09:51:46.477889] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:58.902 [2024-12-05 09:51:46.477895] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:58.902 [2024-12-05 09:51:46.477902] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:58.902 [2024-12-05 09:51:46.477908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.902 [2024-12-05 09:51:46.477915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:58.902 [2024-12-05 09:51:46.477921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:19:58.902 [2024-12-05 09:51:46.477927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.902 [2024-12-05 09:51:46.477995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.902 [2024-12-05 09:51:46.478002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:58.902 [2024-12-05 09:51:46.478008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:58.902 [2024-12-05 09:51:46.478015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.902 [2024-12-05 09:51:46.478089] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:58.902 [2024-12-05 09:51:46.478098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:58.903 [2024-12-05 09:51:46.478104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.903 [2024-12-05 09:51:46.478111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.903 [2024-12-05 09:51:46.478117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:58.903 [2024-12-05 09:51:46.478124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:58.903 [2024-12-05 09:51:46.478129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:58.903 [2024-12-05 09:51:46.478138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:58.903 [2024-12-05 09:51:46.478143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:58.903 [2024-12-05 09:51:46.478150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.903 [2024-12-05 09:51:46.478155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:58.903 [2024-12-05 09:51:46.478161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:58.903 [2024-12-05 09:51:46.478166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.903 [2024-12-05 09:51:46.478172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:58.903 [2024-12-05 09:51:46.478177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:58.903 [2024-12-05 09:51:46.478185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.903 
[2024-12-05 09:51:46.478190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:58.903 [2024-12-05 09:51:46.478197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:58.903 [2024-12-05 09:51:46.478205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.903 [2024-12-05 09:51:46.478212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:58.903 [2024-12-05 09:51:46.478217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:58.903 [2024-12-05 09:51:46.478223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.903 [2024-12-05 09:51:46.478228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:58.903 [2024-12-05 09:51:46.478236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:58.903 [2024-12-05 09:51:46.478241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.903 [2024-12-05 09:51:46.478247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:58.903 [2024-12-05 09:51:46.478252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:58.903 [2024-12-05 09:51:46.478258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.903 [2024-12-05 09:51:46.478263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:58.903 [2024-12-05 09:51:46.478270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:58.903 [2024-12-05 09:51:46.478275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.903 [2024-12-05 09:51:46.478281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:58.903 [2024-12-05 09:51:46.478286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:58.903 [2024-12-05 09:51:46.478292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.903 [2024-12-05 09:51:46.478297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:58.903 [2024-12-05 09:51:46.478303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:58.903 [2024-12-05 09:51:46.478308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.903 [2024-12-05 09:51:46.478314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:58.903 [2024-12-05 09:51:46.478319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:58.903 [2024-12-05 09:51:46.478326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.903 [2024-12-05 09:51:46.478331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:58.903 [2024-12-05 09:51:46.478337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:58.903 [2024-12-05 09:51:46.478343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.903 [2024-12-05 09:51:46.478349] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:58.903 [2024-12-05 09:51:46.478356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:58.903 [2024-12-05 09:51:46.478362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.903 [2024-12-05 09:51:46.478367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.903 [2024-12-05 09:51:46.478375] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:58.903 [2024-12-05 09:51:46.478381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:58.903 [2024-12-05 09:51:46.478388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:58.903 [2024-12-05 09:51:46.478393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:58.903 [2024-12-05 09:51:46.478399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:58.903 [2024-12-05 09:51:46.478404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:58.903 [2024-12-05 09:51:46.478411] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:58.903 [2024-12-05 09:51:46.478418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.903 [2024-12-05 09:51:46.478428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:58.903 [2024-12-05 09:51:46.478434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:58.903 [2024-12-05 09:51:46.478440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:58.903 [2024-12-05 09:51:46.478445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:58.903 [2024-12-05 09:51:46.478452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:58.903 [2024-12-05 09:51:46.478457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:58.903 [2024-12-05 09:51:46.478464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:58.903 [2024-12-05 09:51:46.478469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:58.903 [2024-12-05 09:51:46.478476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:58.903 [2024-12-05 09:51:46.478481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:58.903 [2024-12-05 09:51:46.478487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:58.903 [2024-12-05 09:51:46.478493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:58.903 [2024-12-05 09:51:46.478499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:58.903 [2024-12-05 09:51:46.478504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:58.903 [2024-12-05 09:51:46.478524] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:58.903 [2024-12-05 
09:51:46.478530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.903 [2024-12-05 09:51:46.478540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:58.903 [2024-12-05 09:51:46.478546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:58.903 [2024-12-05 09:51:46.478552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:58.903 [2024-12-05 09:51:46.478558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:58.903 [2024-12-05 09:51:46.478565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.903 [2024-12-05 09:51:46.478571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:58.903 [2024-12-05 09:51:46.478578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:19:58.903 [2024-12-05 09:51:46.478585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.903 [2024-12-05 09:51:46.499079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.903 [2024-12-05 09:51:46.499107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:58.903 [2024-12-05 09:51:46.499117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.451 ms 00:19:58.903 [2024-12-05 09:51:46.499125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.903 [2024-12-05 09:51:46.499215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.903 [2024-12-05 09:51:46.499222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:58.903 [2024-12-05 09:51:46.499230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:58.903 [2024-12-05 09:51:46.499236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.903 [2024-12-05 09:51:46.522860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.903 [2024-12-05 09:51:46.522896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:58.903 [2024-12-05 09:51:46.522905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.607 ms 00:19:58.903 [2024-12-05 09:51:46.522911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.903 [2024-12-05 09:51:46.522955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.903 [2024-12-05 09:51:46.522962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:58.903 [2024-12-05 09:51:46.522970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:58.903 [2024-12-05 09:51:46.522975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.903 [2024-12-05 09:51:46.523244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.903 [2024-12-05 09:51:46.523255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:58.903 [2024-12-05 09:51:46.523264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:19:58.903 [2024-12-05 09:51:46.523270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:58.903 [2024-12-05 09:51:46.523367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.903 [2024-12-05 09:51:46.523373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:58.903 [2024-12-05 09:51:46.523380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:58.904 [2024-12-05 09:51:46.523386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.164 [2024-12-05 09:51:46.534794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.164 [2024-12-05 09:51:46.534818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:59.164 [2024-12-05 09:51:46.534828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.391 ms 00:19:59.164 [2024-12-05 09:51:46.534834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.164 [2024-12-05 09:51:46.561691] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:59.164 [2024-12-05 09:51:46.561721] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:59.164 [2024-12-05 09:51:46.561734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.164 [2024-12-05 09:51:46.561741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:59.164 [2024-12-05 09:51:46.561750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.827 ms 00:19:59.164 [2024-12-05 09:51:46.561760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.164 [2024-12-05 09:51:46.580160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.164 [2024-12-05 09:51:46.580188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:59.164 [2024-12-05 09:51:46.580198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.340 ms 00:19:59.164 [2024-12-05 09:51:46.580204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.164 [2024-12-05 09:51:46.588928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.164 [2024-12-05 09:51:46.588953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:59.164 [2024-12-05 09:51:46.588964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.680 ms 00:19:59.164 [2024-12-05 09:51:46.588969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.164 [2024-12-05 09:51:46.597497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.164 [2024-12-05 09:51:46.597531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:59.164 [2024-12-05 09:51:46.597541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.486 ms 00:19:59.164 [2024-12-05 09:51:46.597546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.164 [2024-12-05 09:51:46.598006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.164 [2024-12-05 09:51:46.598019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:59.164 [2024-12-05 09:51:46.598027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:19:59.164 [2024-12-05 09:51:46.598033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.164 [2024-12-05 
09:51:46.641480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.164 [2024-12-05 09:51:46.641524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:59.164 [2024-12-05 09:51:46.641536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.428 ms 00:19:59.164 [2024-12-05 09:51:46.641542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.164 [2024-12-05 09:51:46.649467] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:59.164 [2024-12-05 09:51:46.660652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.164 [2024-12-05 09:51:46.660685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:59.164 [2024-12-05 09:51:46.660696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.050 ms 00:19:59.164 [2024-12-05 09:51:46.660704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.164 [2024-12-05 09:51:46.660774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.164 [2024-12-05 09:51:46.660783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:59.164 [2024-12-05 09:51:46.660790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:59.164 [2024-12-05 09:51:46.660797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.164 [2024-12-05 09:51:46.660835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.164 [2024-12-05 09:51:46.660843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:59.164 [2024-12-05 09:51:46.660849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:59.164 [2024-12-05 09:51:46.660858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.164 [2024-12-05 09:51:46.660876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.164 [2024-12-05 09:51:46.660884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:59.164 [2024-12-05 09:51:46.660889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:59.164 [2024-12-05 09:51:46.660898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.164 [2024-12-05 09:51:46.660923] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:59.164 [2024-12-05 09:51:46.660933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.164 [2024-12-05 09:51:46.660940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:59.164 [2024-12-05 09:51:46.660948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:59.164 [2024-12-05 09:51:46.660953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.164 [2024-12-05 09:51:46.678458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.164 [2024-12-05 09:51:46.678485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:59.164 [2024-12-05 09:51:46.678496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.485 ms 00:19:59.164 [2024-12-05 09:51:46.678502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.164 [2024-12-05 09:51:46.678582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.164 [2024-12-05 09:51:46.678591] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:59.164 [2024-12-05 09:51:46.678599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:59.164 [2024-12-05 09:51:46.678607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.164 [2024-12-05 09:51:46.679213] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:59.164 [2024-12-05 09:51:46.681522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 222.870 ms, result 0 00:19:59.164 [2024-12-05 09:51:46.682452] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:59.164 Some configs were skipped because the RPC state that can call them passed over. 00:19:59.164 09:51:46 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:59.425 [2024-12-05 09:51:46.910370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.425 [2024-12-05 09:51:46.910477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:59.425 [2024-12-05 09:51:46.910586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:19:59.425 [2024-12-05 09:51:46.910614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.425 [2024-12-05 09:51:46.910655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.483 ms, result 0 00:19:59.425 true 00:19:59.425 09:51:46 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:59.685 [2024-12-05 09:51:47.110519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.685 [2024-12-05 09:51:47.110611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:59.685 [2024-12-05 09:51:47.110653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.152 ms 00:19:59.685 [2024-12-05 09:51:47.110670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.685 [2024-12-05 09:51:47.110710] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.351 ms, result 0 00:19:59.685 true 00:19:59.685 09:51:47 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76643 00:19:59.685 09:51:47 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76643 ']' 00:19:59.685 09:51:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76643 00:19:59.685 09:51:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:59.685 09:51:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.685 09:51:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76643 00:19:59.685 killing process with pid 76643 00:19:59.685 09:51:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:59.685 09:51:47 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:59.685 09:51:47 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76643' 00:19:59.685 09:51:47 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76643 00:19:59.685 09:51:47 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76643 00:20:00.256 [2024-12-05 09:51:47.680795] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.256 [2024-12-05 09:51:47.680839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:00.256 [2024-12-05 09:51:47.680850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:00.256 [2024-12-05 09:51:47.680857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.256 [2024-12-05 09:51:47.680877] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:00.256 [2024-12-05 09:51:47.682965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.256 [2024-12-05 09:51:47.682990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:00.256 [2024-12-05 09:51:47.683002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.075 ms 00:20:00.256 [2024-12-05 09:51:47.683008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.256 [2024-12-05 09:51:47.683229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.256 [2024-12-05 09:51:47.683237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:00.256 [2024-12-05 09:51:47.683244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:20:00.256 [2024-12-05 09:51:47.683250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.256 [2024-12-05 09:51:47.686462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.256 [2024-12-05 09:51:47.686486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:00.256 [2024-12-05 09:51:47.686497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.195 ms 00:20:00.256 [2024-12-05 09:51:47.686503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.256 [2024-12-05 09:51:47.691792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.256 [2024-12-05 09:51:47.691935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:00.256 [2024-12-05 09:51:47.691953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.235 ms 00:20:00.256 [2024-12-05 09:51:47.691960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.256 [2024-12-05 09:51:47.699267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.256 [2024-12-05 09:51:47.699366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:00.256 [2024-12-05 09:51:47.699380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.262 ms 00:20:00.256 [2024-12-05 09:51:47.699386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.256 [2024-12-05 09:51:47.705589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.256 [2024-12-05 09:51:47.705675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:00.256 [2024-12-05 09:51:47.705724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.172 ms 00:20:00.256 [2024-12-05 09:51:47.705742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.256 [2024-12-05 09:51:47.705850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.256 [2024-12-05 09:51:47.705869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:00.256 [2024-12-05 09:51:47.705886] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:00.256 [2024-12-05 09:51:47.705928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.256 [2024-12-05 09:51:47.713430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.256 [2024-12-05 09:51:47.713530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:00.256 [2024-12-05 09:51:47.713580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.473 ms 00:20:00.256 [2024-12-05 09:51:47.713596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.256 [2024-12-05 09:51:47.720840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.256 [2024-12-05 09:51:47.720987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:00.256 [2024-12-05 09:51:47.721044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.207 ms 00:20:00.256 [2024-12-05 09:51:47.721084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.256 [2024-12-05 09:51:47.728300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.256 [2024-12-05 09:51:47.728404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:00.256 [2024-12-05 09:51:47.728453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.966 ms 00:20:00.256 [2024-12-05 09:51:47.728471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.256 [2024-12-05 09:51:47.735384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.256 [2024-12-05 09:51:47.735466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:00.256 [2024-12-05 09:51:47.735508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.845 ms 00:20:00.256 [2024-12-05 09:51:47.735533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.256 [2024-12-05 09:51:47.735567] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:00.256 [2024-12-05 09:51:47.735615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:00.256 [2024-12-05 09:51:47.735643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:00.256 [2024-12-05 09:51:47.735682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:00.256 [2024-12-05 09:51:47.735708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:00.256 [2024-12-05 09:51:47.735753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:00.256 [2024-12-05 09:51:47.735779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:00.256 [2024-12-05 09:51:47.735822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:00.256 [2024-12-05 09:51:47.735846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:00.256 [2024-12-05 09:51:47.735869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:00.256 [2024-12-05 09:51:47.735892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:00.256 [2024-12-05 09:51:47.735921] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:00.256 [2024-12-05 09:51:47.735978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:00.256 [2024-12-05 09:51:47.736002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 
[2024-12-05 09:51:47.736736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.736990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:00.257 [2024-12-05 09:51:47.737637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.737999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.738938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:00.257 [2024-12-05 09:51:47.739013] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:00.257 [2024-12-05 09:51:47.739025] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7839b8c0-c26e-4e3f-8cc1-f17ca740b133 00:20:00.257 [2024-12-05 09:51:47.739033] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:00.257 [2024-12-05 09:51:47.739041] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:00.257 [2024-12-05 09:51:47.739047] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:00.257 [2024-12-05 09:51:47.739054] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:00.257 [2024-12-05 09:51:47.739060] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:00.257 [2024-12-05 09:51:47.739067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:00.257 [2024-12-05 09:51:47.739073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:00.257 [2024-12-05 09:51:47.739079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:00.257 [2024-12-05 09:51:47.739084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:00.257 [2024-12-05 09:51:47.739091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
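The statistics block just dumped is the FTL device's shutdown summary for the trim-only run: every band is still free, the 960 total writes are all internal metadata writes, and with zero user writes the "WAF: inf" line is consistent with the write amplification factor being computed as total writes over user writes, which divides by zero here. A minimal sketch of pulling the same counters from a live device, assuming the SPDK build exposes the bdev_ftl_get_stats RPC and that it takes the same -b device flag as the bdev_ftl_unmap calls above (both are assumptions, not shown in this log):

  # Hypothetical check, not part of the captured run: query live FTL counters
  # for ftl0 through the same rpc.py entry point the trim test uses.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_ftl_get_stats -b ftl0
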
00:20:00.257 [2024-12-05 09:51:47.739097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:00.257 [2024-12-05 09:51:47.739105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.526 ms 00:20:00.257 [2024-12-05 09:51:47.739110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.257 [2024-12-05 09:51:47.748623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.257 [2024-12-05 09:51:47.748647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:00.257 [2024-12-05 09:51:47.748658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.481 ms 00:20:00.257 [2024-12-05 09:51:47.748664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.257 [2024-12-05 09:51:47.748949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.257 [2024-12-05 09:51:47.748966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:00.257 [2024-12-05 09:51:47.748977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:20:00.257 [2024-12-05 09:51:47.748982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.257 [2024-12-05 09:51:47.783463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.257 [2024-12-05 09:51:47.783489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:00.257 [2024-12-05 09:51:47.783498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.257 [2024-12-05 09:51:47.783504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.257 [2024-12-05 09:51:47.783597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.257 [2024-12-05 09:51:47.783606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.257 [2024-12-05 09:51:47.783615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.257 [2024-12-05 09:51:47.783621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.258 [2024-12-05 09:51:47.783655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.258 [2024-12-05 09:51:47.783663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.258 [2024-12-05 09:51:47.783671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.258 [2024-12-05 09:51:47.783677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.258 [2024-12-05 09:51:47.783691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.258 [2024-12-05 09:51:47.783698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.258 [2024-12-05 09:51:47.783704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.258 [2024-12-05 09:51:47.783711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.258 [2024-12-05 09:51:47.842309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.258 [2024-12-05 09:51:47.842340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:00.258 [2024-12-05 09:51:47.842350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.258 [2024-12-05 09:51:47.842356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.519 [2024-12-05 
09:51:47.889808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.519 [2024-12-05 09:51:47.889929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:00.519 [2024-12-05 09:51:47.889944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.519 [2024-12-05 09:51:47.889952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.519 [2024-12-05 09:51:47.890016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.519 [2024-12-05 09:51:47.890023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.519 [2024-12-05 09:51:47.890033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.519 [2024-12-05 09:51:47.890039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.519 [2024-12-05 09:51:47.890063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.519 [2024-12-05 09:51:47.890070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.519 [2024-12-05 09:51:47.890077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.519 [2024-12-05 09:51:47.890083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.519 [2024-12-05 09:51:47.890155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.519 [2024-12-05 09:51:47.890163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.519 [2024-12-05 09:51:47.890170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.519 [2024-12-05 09:51:47.890175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.519 [2024-12-05 09:51:47.890205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.519 [2024-12-05 09:51:47.890212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:00.519 [2024-12-05 09:51:47.890219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.519 [2024-12-05 09:51:47.890225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.519 [2024-12-05 09:51:47.890258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.519 [2024-12-05 09:51:47.890265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.519 [2024-12-05 09:51:47.890273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.519 [2024-12-05 09:51:47.890279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.519 [2024-12-05 09:51:47.890317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.519 [2024-12-05 09:51:47.890325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.519 [2024-12-05 09:51:47.890332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.519 [2024-12-05 09:51:47.890337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.519 [2024-12-05 09:51:47.890442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 209.630 ms, result 0 00:20:01.091 09:51:48 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:01.091 09:51:48 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:01.091 [2024-12-05 09:51:48.475936] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:20:01.091 [2024-12-05 09:51:48.476685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76690 ] 00:20:01.091 [2024-12-05 09:51:48.633430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.091 [2024-12-05 09:51:48.712315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.352 [2024-12-05 09:51:48.920805] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:01.352 [2024-12-05 09:51:48.920854] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:01.614 [2024-12-05 09:51:49.072382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.614 [2024-12-05 09:51:49.072417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:01.614 [2024-12-05 09:51:49.072427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:01.614 [2024-12-05 09:51:49.072434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.614 [2024-12-05 09:51:49.074732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.614 [2024-12-05 09:51:49.074765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:01.614 [2024-12-05 09:51:49.074773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.286 ms 00:20:01.614 [2024-12-05 09:51:49.074779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.614 [2024-12-05 09:51:49.074857] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:01.614 [2024-12-05 09:51:49.075370] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:01.614 [2024-12-05 09:51:49.075386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.614 [2024-12-05 09:51:49.075393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:01.614 [2024-12-05 09:51:49.075400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:20:01.614 [2024-12-05 09:51:49.075406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.614 [2024-12-05 09:51:49.076400] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:01.614 [2024-12-05 09:51:49.085975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.614 [2024-12-05 09:51:49.086090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:01.614 [2024-12-05 09:51:49.086103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.576 ms 00:20:01.614 [2024-12-05 09:51:49.086109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.614 [2024-12-05 09:51:49.086178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.614 [2024-12-05 09:51:49.086186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:01.614 [2024-12-05 09:51:49.086192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.016 ms 00:20:01.614 [2024-12-05 09:51:49.086198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.614 [2024-12-05 09:51:49.090428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.614 [2024-12-05 09:51:49.090451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:01.614 [2024-12-05 09:51:49.090459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.200 ms 00:20:01.615 [2024-12-05 09:51:49.090464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.615 [2024-12-05 09:51:49.090556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.615 [2024-12-05 09:51:49.090564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:01.615 [2024-12-05 09:51:49.090570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:01.615 [2024-12-05 09:51:49.090576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.615 [2024-12-05 09:51:49.090597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.615 [2024-12-05 09:51:49.090604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:01.615 [2024-12-05 09:51:49.090610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:01.615 [2024-12-05 09:51:49.090616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.615 [2024-12-05 09:51:49.090631] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:01.615 [2024-12-05 09:51:49.093332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.615 [2024-12-05 09:51:49.093436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:01.615 [2024-12-05 09:51:49.093448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.705 ms 00:20:01.615 [2024-12-05 09:51:49.093454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.615 [2024-12-05 09:51:49.093485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.615 [2024-12-05 09:51:49.093492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:01.615 [2024-12-05 09:51:49.093498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:01.615 [2024-12-05 09:51:49.093504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.615 [2024-12-05 09:51:49.093532] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:01.615 [2024-12-05 09:51:49.093547] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:01.615 [2024-12-05 09:51:49.093574] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:01.615 [2024-12-05 09:51:49.093586] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:01.615 [2024-12-05 09:51:49.093665] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:01.615 [2024-12-05 09:51:49.093673] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:01.615 [2024-12-05 09:51:49.093681] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:01.615 [2024-12-05 09:51:49.093690] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:01.615 [2024-12-05 09:51:49.093697] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:01.615 [2024-12-05 09:51:49.093703] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:01.615 [2024-12-05 09:51:49.093709] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:01.615 [2024-12-05 09:51:49.093715] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:01.615 [2024-12-05 09:51:49.093720] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:01.615 [2024-12-05 09:51:49.093727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.615 [2024-12-05 09:51:49.093732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:01.615 [2024-12-05 09:51:49.093739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:20:01.615 [2024-12-05 09:51:49.093744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.615 [2024-12-05 09:51:49.093810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.615 [2024-12-05 09:51:49.093819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:01.615 [2024-12-05 09:51:49.093825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:01.615 [2024-12-05 09:51:49.093831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.615 [2024-12-05 09:51:49.093907] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:01.615 [2024-12-05 09:51:49.093914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:01.615 [2024-12-05 09:51:49.093921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.615 [2024-12-05 09:51:49.093927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.615 [2024-12-05 09:51:49.093933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:01.615 [2024-12-05 09:51:49.093938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:01.615 [2024-12-05 09:51:49.093943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:01.615 [2024-12-05 09:51:49.093949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:01.615 [2024-12-05 09:51:49.093955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:01.615 [2024-12-05 09:51:49.093960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.615 [2024-12-05 09:51:49.093966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:01.615 [2024-12-05 09:51:49.093975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:01.615 [2024-12-05 09:51:49.093980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.615 [2024-12-05 09:51:49.093986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:01.615 [2024-12-05 09:51:49.093991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:01.615 [2024-12-05 09:51:49.093996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.615 [2024-12-05 09:51:49.094001] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:01.615 [2024-12-05 09:51:49.094006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:01.615 [2024-12-05 09:51:49.094010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.615 [2024-12-05 09:51:49.094015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:01.615 [2024-12-05 09:51:49.094021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:01.615 [2024-12-05 09:51:49.094025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.615 [2024-12-05 09:51:49.094030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:01.615 [2024-12-05 09:51:49.094035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:01.615 [2024-12-05 09:51:49.094040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.615 [2024-12-05 09:51:49.094045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:01.615 [2024-12-05 09:51:49.094050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:01.615 [2024-12-05 09:51:49.094055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.615 [2024-12-05 09:51:49.094060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:01.615 [2024-12-05 09:51:49.094066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:01.615 [2024-12-05 09:51:49.094070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.615 [2024-12-05 09:51:49.094075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:01.615 [2024-12-05 09:51:49.094080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:01.615 [2024-12-05 09:51:49.094085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.615 [2024-12-05 09:51:49.094091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:01.615 [2024-12-05 09:51:49.094096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:01.615 [2024-12-05 09:51:49.094100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.615 [2024-12-05 09:51:49.094105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:01.615 [2024-12-05 09:51:49.094110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:01.615 [2024-12-05 09:51:49.094115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.615 [2024-12-05 09:51:49.094120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:01.615 [2024-12-05 09:51:49.094124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:01.615 [2024-12-05 09:51:49.094130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.615 [2024-12-05 09:51:49.094135] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:01.615 [2024-12-05 09:51:49.094142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:01.615 [2024-12-05 09:51:49.094150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.615 [2024-12-05 09:51:49.094155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.615 [2024-12-05 09:51:49.094161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:01.615 
[2024-12-05 09:51:49.094166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:01.615 [2024-12-05 09:51:49.094171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:01.615 [2024-12-05 09:51:49.094176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:01.615 [2024-12-05 09:51:49.094181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:01.615 [2024-12-05 09:51:49.094186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:01.615 [2024-12-05 09:51:49.094192] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:01.615 [2024-12-05 09:51:49.094199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.615 [2024-12-05 09:51:49.094205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:01.615 [2024-12-05 09:51:49.094211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:01.615 [2024-12-05 09:51:49.094216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:01.615 [2024-12-05 09:51:49.094221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:01.615 [2024-12-05 09:51:49.094227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:01.615 [2024-12-05 09:51:49.094232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:01.615 [2024-12-05 09:51:49.094237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:01.615 [2024-12-05 09:51:49.094243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:01.616 [2024-12-05 09:51:49.094249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:01.616 [2024-12-05 09:51:49.094254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:01.616 [2024-12-05 09:51:49.094259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:01.616 [2024-12-05 09:51:49.094265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:01.616 [2024-12-05 09:51:49.094270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:01.616 [2024-12-05 09:51:49.094275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:01.616 [2024-12-05 09:51:49.094280] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:01.616 [2024-12-05 09:51:49.094286] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.616 [2024-12-05 09:51:49.094292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:01.616 [2024-12-05 09:51:49.094298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:01.616 [2024-12-05 09:51:49.094304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:01.616 [2024-12-05 09:51:49.094310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:01.616 [2024-12-05 09:51:49.094315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.616 [2024-12-05 09:51:49.094323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:01.616 [2024-12-05 09:51:49.094328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:20:01.616 [2024-12-05 09:51:49.094333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.616 [2024-12-05 09:51:49.115068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.616 [2024-12-05 09:51:49.115094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:01.616 [2024-12-05 09:51:49.115102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.689 ms 00:20:01.616 [2024-12-05 09:51:49.115108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.616 [2024-12-05 09:51:49.115205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.616 [2024-12-05 09:51:49.115214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:01.616 [2024-12-05 09:51:49.115220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:01.616 [2024-12-05 09:51:49.115227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.616 [2024-12-05 09:51:49.153407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.616 [2024-12-05 09:51:49.153524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:01.616 [2024-12-05 09:51:49.153541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.162 ms 00:20:01.616 [2024-12-05 09:51:49.153548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.616 [2024-12-05 09:51:49.153606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.616 [2024-12-05 09:51:49.153614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:01.616 [2024-12-05 09:51:49.153621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:01.616 [2024-12-05 09:51:49.153627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.616 [2024-12-05 09:51:49.153916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.616 [2024-12-05 09:51:49.153929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:01.616 [2024-12-05 09:51:49.153936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:20:01.616 [2024-12-05 09:51:49.153946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.616 [2024-12-05 
09:51:49.154049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.616 [2024-12-05 09:51:49.154057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:01.616 [2024-12-05 09:51:49.154063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:01.616 [2024-12-05 09:51:49.154069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.616 [2024-12-05 09:51:49.164860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.616 [2024-12-05 09:51:49.164884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:01.616 [2024-12-05 09:51:49.164892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.777 ms 00:20:01.616 [2024-12-05 09:51:49.164897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.616 [2024-12-05 09:51:49.174353] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:01.616 [2024-12-05 09:51:49.174378] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:01.616 [2024-12-05 09:51:49.174386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.616 [2024-12-05 09:51:49.174393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:01.616 [2024-12-05 09:51:49.174400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.406 ms 00:20:01.616 [2024-12-05 09:51:49.174406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.616 [2024-12-05 09:51:49.192925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.616 [2024-12-05 09:51:49.192951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:01.616 [2024-12-05 09:51:49.192960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.472 ms 00:20:01.616 [2024-12-05 09:51:49.192967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.616 [2024-12-05 09:51:49.202146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.616 [2024-12-05 09:51:49.202170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:01.616 [2024-12-05 09:51:49.202177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.126 ms 00:20:01.616 [2024-12-05 09:51:49.202183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.616 [2024-12-05 09:51:49.211057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.616 [2024-12-05 09:51:49.211079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:01.616 [2024-12-05 09:51:49.211087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.835 ms 00:20:01.616 [2024-12-05 09:51:49.211092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.616 [2024-12-05 09:51:49.211547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.616 [2024-12-05 09:51:49.211558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:01.616 [2024-12-05 09:51:49.211566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:20:01.616 [2024-12-05 09:51:49.211571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.878 [2024-12-05 09:51:49.255206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:01.878 [2024-12-05 09:51:49.255345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:01.878 [2024-12-05 09:51:49.255360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.617 ms 00:20:01.878 [2024-12-05 09:51:49.255367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.878 [2024-12-05 09:51:49.263014] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:01.878 [2024-12-05 09:51:49.274097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.878 [2024-12-05 09:51:49.274227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:01.878 [2024-12-05 09:51:49.274240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.636 ms 00:20:01.878 [2024-12-05 09:51:49.274251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.878 [2024-12-05 09:51:49.274321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.878 [2024-12-05 09:51:49.274329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:01.878 [2024-12-05 09:51:49.274336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:01.878 [2024-12-05 09:51:49.274342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.879 [2024-12-05 09:51:49.274379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.879 [2024-12-05 09:51:49.274386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:01.879 [2024-12-05 09:51:49.274392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:01.879 [2024-12-05 09:51:49.274400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.879 [2024-12-05 09:51:49.274423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.879 [2024-12-05 09:51:49.274430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:01.879 [2024-12-05 09:51:49.274437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:01.879 [2024-12-05 09:51:49.274443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.879 [2024-12-05 09:51:49.274467] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:01.879 [2024-12-05 09:51:49.274474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.879 [2024-12-05 09:51:49.274480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:01.879 [2024-12-05 09:51:49.274486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:01.879 [2024-12-05 09:51:49.274492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.879 [2024-12-05 09:51:49.293415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.879 [2024-12-05 09:51:49.293519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:01.879 [2024-12-05 09:51:49.293532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.908 ms 00:20:01.879 [2024-12-05 09:51:49.293539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.879 [2024-12-05 09:51:49.293602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.879 [2024-12-05 09:51:49.293610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization
00:20:01.879 [2024-12-05 09:51:49.293617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms
00:20:01.879 [2024-12-05 09:51:49.293623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:01.879 [2024-12-05 09:51:49.294237] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:01.879 [2024-12-05 09:51:49.296455] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.638 ms, result 0
00:20:01.879 [2024-12-05 09:51:49.297839] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:01.879 [2024-12-05 09:51:49.308620] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:02.822 [2024-12-05T09:51:51.393Z] Copying: 35/256 [MB] (35 MBps)
[2024-12-05T09:51:52.338Z] Copying: 49/256 [MB] (14 MBps)
[2024-12-05T09:51:53.344Z] Copying: 68/256 [MB] (18 MBps)
[2024-12-05T09:51:54.730Z] Copying: 79/256 [MB] (10 MBps)
[2024-12-05T09:51:55.670Z] Copying: 93/256 [MB] (14 MBps)
[2024-12-05T09:51:56.614Z] Copying: 112/256 [MB] (18 MBps)
[2024-12-05T09:51:57.560Z] Copying: 122/256 [MB] (10 MBps)
[2024-12-05T09:51:58.506Z] Copying: 138/256 [MB] (16 MBps)
[2024-12-05T09:51:59.452Z] Copying: 161/256 [MB] (22 MBps)
[2024-12-05T09:52:00.397Z] Copying: 180/256 [MB] (19 MBps)
[2024-12-05T09:52:01.343Z] Copying: 192/256 [MB] (11 MBps)
[2024-12-05T09:52:02.735Z] Copying: 211/256 [MB] (19 MBps)
[2024-12-05T09:52:02.735Z] Copying: 250/256 [MB] (38 MBps)
[2024-12-05T09:52:02.735Z] Copying: 256/256 [MB] (average 19 MBps)
[2024-12-05 09:52:02.500018] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:15.106 [2024-12-05 09:52:02.507340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.106 [2024-12-05 09:52:02.507471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:15.106 [2024-12-05 09:52:02.507492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:15.106 [2024-12-05 09:52:02.507499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.106 [2024-12-05 09:52:02.507528] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:15.106 [2024-12-05 09:52:02.509645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.106 [2024-12-05 09:52:02.509669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:15.106 [2024-12-05 09:52:02.509678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.107 ms
00:20:15.106 [2024-12-05 09:52:02.509685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.106 [2024-12-05 09:52:02.509878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.106 [2024-12-05 09:52:02.509886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:15.106 [2024-12-05 09:52:02.509893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms
00:20:15.106 [2024-12-05 09:52:02.509899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.106 [2024-12-05 09:52:02.512719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.106 [2024-12-05 09:52:02.512738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:15.106 [2024-12-05 09:52:02.512745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.806 ms
00:20:15.106 [2024-12-05 09:52:02.512752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.106 [2024-12-05 09:52:02.517946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.106 [2024-12-05 09:52:02.518046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:20:15.106 [2024-12-05 09:52:02.518058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.180 ms
00:20:15.106 [2024-12-05 09:52:02.518064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.106 [2024-12-05 09:52:02.536557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.106 [2024-12-05 09:52:02.536660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:20:15.106 [2024-12-05 09:52:02.536673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.449 ms
00:20:15.106 [2024-12-05 09:52:02.536679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.106 [2024-12-05 09:52:02.548303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.106 [2024-12-05 09:52:02.548400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:20:15.106 [2024-12-05 09:52:02.548417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.597 ms
00:20:15.106 [2024-12-05 09:52:02.548423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.106 [2024-12-05 09:52:02.548527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.106 [2024-12-05 09:52:02.548535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:20:15.106 [2024-12-05 09:52:02.548548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms
00:20:15.106 [2024-12-05 09:52:02.548554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.106 [2024-12-05 09:52:02.566814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.106 [2024-12-05 09:52:02.566909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:20:15.106 [2024-12-05 09:52:02.566921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.247 ms
00:20:15.106 [2024-12-05 09:52:02.566926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.106 [2024-12-05 09:52:02.585418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.106 [2024-12-05 09:52:02.585442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:20:15.106 [2024-12-05 09:52:02.585450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.458 ms
00:20:15.106 [2024-12-05 09:52:02.585455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.106 [2024-12-05 09:52:02.602519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.106 [2024-12-05 09:52:02.602542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:20:15.106 [2024-12-05 09:52:02.602550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.038 ms
00:20:15.106 [2024-12-05 09:52:02.602556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.106 [2024-12-05 09:52:02.619979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.106 [2024-12-05 09:52:02.620002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:15.106 [2024-12-05 09:52:02.620010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.377 ms
00:20:15.106 [2024-12-05 09:52:02.620016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.106 [2024-12-05 09:52:02.620042] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:15.106 [2024-12-05 09:52:02.620053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:20:15.106 ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 - Band 99: 0 / 261120 wr_cnt: 0 state: free
00:20:15.108 [2024-12-05 09:52:02.620665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:20:15.108 [2024-12-05 09:52:02.620677] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:15.108 [2024-12-05 09:52:02.620683] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7839b8c0-c26e-4e3f-8cc1-f17ca740b133
00:20:15.108 [2024-12-05 09:52:02.620690] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:15.108 [2024-12-05 09:52:02.620696] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:15.108 [2024-12-05 09:52:02.620701] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:15.108 [2024-12-05 09:52:02.620707] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:15.108 [2024-12-05 09:52:02.620712] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:15.108 [2024-12-05 09:52:02.620718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:15.108 [2024-12-05 09:52:02.620726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:15.108 [2024-12-05 09:52:02.620731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:15.108 [2024-12-05 09:52:02.620736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:15.108 [2024-12-05 09:52:02.620742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.108 [2024-12-05 09:52:02.620747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:15.108 [2024-12-05 09:52:02.620753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms
00:20:15.108 [2024-12-05 09:52:02.620759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.108 [2024-12-05 09:52:02.630012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.108 [2024-12-05 09:52:02.630035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:15.108 [2024-12-05 09:52:02.630043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.240 ms
00:20:15.108 [2024-12-05 09:52:02.630048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.108 [2024-12-05 09:52:02.630325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.108 [2024-12-05 09:52:02.630333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:15.108 [2024-12-05 09:52:02.630339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms
00:20:15.108 [2024-12-05 09:52:02.630346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.108 [2024-12-05 09:52:02.658009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:15.108 [2024-12-05 09:52:02.658035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:15.108 [2024-12-05 09:52:02.658042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:15.108 [2024-12-05 09:52:02.658051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.108 [2024-12-05 09:52:02.658101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:15.108 [2024-12-05 09:52:02.658107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:15.108 [2024-12-05 09:52:02.658114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:15.108 [2024-12-05 09:52:02.658119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.108 [2024-12-05 09:52:02.658154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:15.108 [2024-12-05 09:52:02.658161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:15.108 [2024-12-05 09:52:02.658167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:15.108 [2024-12-05 09:52:02.658172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.108 [2024-12-05 09:52:02.658187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:15.108 [2024-12-05 09:52:02.658193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:15.108 [2024-12-05 09:52:02.658198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:15.108 [2024-12-05 09:52:02.658204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.108 [2024-12-05 09:52:02.718358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:15.108 [2024-12-05 09:52:02.718387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:15.108 [2024-12-05 09:52:02.718397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:15.108 [2024-12-05 09:52:02.718404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.370 [2024-12-05 09:52:02.768530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:15.370 [2024-12-05 09:52:02.768559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:15.370 [2024-12-05 09:52:02.768568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:15.370 [2024-12-05 09:52:02.768574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.370 [2024-12-05 09:52:02.768611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:15.370 [2024-12-05 09:52:02.768618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:15.370 [2024-12-05 09:52:02.768625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:15.370 [2024-12-05 09:52:02.768631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.370 [2024-12-05 09:52:02.768653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:15.370 [2024-12-05 09:52:02.768662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:15.370 [2024-12-05 09:52:02.768669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:15.370 [2024-12-05 09:52:02.768675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.370 [2024-12-05 09:52:02.768742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:15.370 [2024-12-05 09:52:02.768750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:15.370 [2024-12-05 09:52:02.768756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:15.370 [2024-12-05 09:52:02.768762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.370 [2024-12-05 09:52:02.768786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:15.370 [2024-12-05 09:52:02.768794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:20:15.370 [2024-12-05 09:52:02.768803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:15.370 [2024-12-05 09:52:02.768809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.370 [2024-12-05 09:52:02.768839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:15.370 [2024-12-05 09:52:02.768846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:15.370 [2024-12-05 09:52:02.768852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:15.370 [2024-12-05 09:52:02.768858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.370 [2024-12-05 09:52:02.768890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:15.370 [2024-12-05 09:52:02.768899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:15.370 [2024-12-05 09:52:02.768906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:15.370 [2024-12-05 09:52:02.768911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.370 [2024-12-05 09:52:02.769013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 261.664 ms, result 0
00:20:15.944
00:20:15.944
00:20:15.944
00:20:15.944 09:52:03 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:20:15.944 09:52:03 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:20:16.517 09:52:03 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-12-05 09:52:03.957688] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:20:16.517 [2024-12-05 09:52:03.957791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76854 ]
00:20:16.517 [2024-12-05 09:52:04.099285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:16.779 [2024-12-05 09:52:04.180146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:16.779 [2024-12-05 09:52:04.389845] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:16.779 [2024-12-05 09:52:04.389897] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:17.041 [2024-12-05 09:52:04.537691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.041 [2024-12-05 09:52:04.537834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:20:17.041 [2024-12-05 09:52:04.537849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:20:17.041 [2024-12-05 09:52:04.537856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.041 [2024-12-05 09:52:04.539946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.041 [2024-12-05 09:52:04.539975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:17.041 [2024-12-05 09:52:04.539982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.073 ms
00:20:17.041 [2024-12-05 09:52:04.539988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.041 [2024-12-05 09:52:04.540044] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:17.041 [2024-12-05 09:52:04.540566] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:17.041 [2024-12-05 09:52:04.540578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.041 [2024-12-05 09:52:04.540585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:17.041 [2024-12-05 09:52:04.540592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms
00:20:17.041 [2024-12-05 09:52:04.540597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.041 [2024-12-05 09:52:04.541587] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:17.041 [2024-12-05 09:52:04.551255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.041 [2024-12-05 09:52:04.551281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:20:17.041 [2024-12-05 09:52:04.551289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.668 ms
00:20:17.041 [2024-12-05 09:52:04.551296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.041 [2024-12-05 09:52:04.551360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.041 [2024-12-05 09:52:04.551369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:20:17.041 [2024-12-05 09:52:04.551375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms
00:20:17.041 [2024-12-05 09:52:04.551381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.041 [2024-12-05 09:52:04.555768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.041 [2024-12-05 09:52:04.555791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:17.041 [2024-12-05 09:52:04.555798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.357 ms
00:20:17.041 [2024-12-05 09:52:04.555804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.041 [2024-12-05 09:52:04.555875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.041 [2024-12-05 09:52:04.555883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:17.041 [2024-12-05 09:52:04.555890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:20:17.041 [2024-12-05 09:52:04.555897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.041 [2024-12-05 09:52:04.555932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.041 [2024-12-05 09:52:04.555938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:20:17.041 [2024-12-05 09:52:04.555944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:20:17.041 [2024-12-05 09:52:04.555950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.041 [2024-12-05 09:52:04.555966] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:20:17.041 [2024-12-05 09:52:04.558516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.041 [2024-12-05 09:52:04.558537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:17.041 [2024-12-05 09:52:04.558544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.554 ms
00:20:17.041 [2024-12-05 09:52:04.558549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.041 [2024-12-05 09:52:04.558577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.041 [2024-12-05 09:52:04.558584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:20:17.041 [2024-12-05 09:52:04.558590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:20:17.041 [2024-12-05 09:52:04.558596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.041 [2024-12-05 09:52:04.558611] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:20:17.041 [2024-12-05 09:52:04.558626] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:20:17.041 [2024-12-05 09:52:04.558652] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:20:17.041 [2024-12-05 09:52:04.558664] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:20:17.041 [2024-12-05 09:52:04.558743] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:17.041 [2024-12-05 09:52:04.558752] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:17.041 [2024-12-05 09:52:04.558760] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:17.041 [2024-12-05 09:52:04.558770] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:20:17.041 [2024-12-05 09:52:04.558776] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:20:17.041 [2024-12-05 09:52:04.558783] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:20:17.041 [2024-12-05 09:52:04.558789] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:20:17.041 [2024-12-05 09:52:04.558795] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:20:17.041 [2024-12-05 09:52:04.558801] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:20:17.041 [2024-12-05 09:52:04.558807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.041 [2024-12-05 09:52:04.558813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:20:17.041 [2024-12-05 09:52:04.558819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms
00:20:17.041 [2024-12-05 09:52:04.558825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.041 [2024-12-05 09:52:04.558893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.041 [2024-12-05 09:52:04.558907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:20:17.041 [2024-12-05 09:52:04.558913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms
00:20:17.041 [2024-12-05 09:52:04.558919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.041 [2024-12-05 09:52:04.558996] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:20:17.041 [2024-12-05 09:52:04.559004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:20:17.041 [2024-12-05 09:52:04.559011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:20:17.041 [2024-12-05 09:52:04.559017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:17.041 [2024-12-05 09:52:04.559023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:20:17.041 [2024-12-05 09:52:04.559029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:20:17.041 [2024-12-05 09:52:04.559034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:20:17.041 [2024-12-05 09:52:04.559040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:20:17.041 [2024-12-05 09:52:04.559045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:20:17.041 [2024-12-05 09:52:04.559051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:20:17.041 [2024-12-05 09:52:04.559057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:20:17.041 [2024-12-05 09:52:04.559068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:20:17.041 [2024-12-05 09:52:04.559073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:20:17.041 [2024-12-05 09:52:04.559078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:20:17.041 [2024-12-05 09:52:04.559083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:20:17.041 [2024-12-05 09:52:04.559089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:17.041 [2024-12-05 09:52:04.559094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:20:17.041 [2024-12-05 09:52:04.559099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:20:17.041 [2024-12-05 09:52:04.559104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:17.041 [2024-12-05 09:52:04.559110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:20:17.041 [2024-12-05 09:52:04.559115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:20:17.041 [2024-12-05 09:52:04.559120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:17.041 [2024-12-05 09:52:04.559125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:20:17.041 [2024-12-05 09:52:04.559130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:20:17.041 [2024-12-05 09:52:04.559135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:17.041 [2024-12-05 09:52:04.559141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:20:17.041 [2024-12-05 09:52:04.559146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:20:17.041 [2024-12-05 09:52:04.559151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:17.041 [2024-12-05 09:52:04.559156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:20:17.041 [2024-12-05 09:52:04.559161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:20:17.041 [2024-12-05 09:52:04.559166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:17.041 [2024-12-05 09:52:04.559171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:20:17.041 [2024-12-05 09:52:04.559177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:20:17.041 [2024-12-05 09:52:04.559182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:20:17.041 [2024-12-05 09:52:04.559188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:20:17.041 [2024-12-05 09:52:04.559194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:20:17.041 [2024-12-05 09:52:04.559200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:20:17.041 [2024-12-05 09:52:04.559205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:20:17.041 [2024-12-05 09:52:04.559210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:20:17.041 [2024-12-05 09:52:04.559214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:17.041 [2024-12-05 09:52:04.559220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:20:17.041 [2024-12-05 09:52:04.559225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:20:17.041 [2024-12-05 09:52:04.559230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:17.041 [2024-12-05 09:52:04.559235] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:20:17.041 [2024-12-05 09:52:04.559241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:20:17.041 [2024-12-05 09:52:04.559249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:20:17.041 [2024-12-05 09:52:04.559257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:17.041 [2024-12-05 09:52:04.559263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:20:17.041 [2024-12-05 09:52:04.559268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:20:17.041 [2024-12-05 09:52:04.559273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:20:17.042 [2024-12-05 09:52:04.559278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:20:17.042 [2024-12-05 09:52:04.559283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:20:17.042 [2024-12-05 09:52:04.559288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:20:17.042 [2024-12-05 09:52:04.559295] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:17.042 [2024-12-05 09:52:04.559301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:17.042 [2024-12-05 09:52:04.559307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:20:17.042 [2024-12-05 09:52:04.559313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:20:17.042 [2024-12-05 09:52:04.559319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:20:17.042 [2024-12-05 09:52:04.559325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:20:17.042 [2024-12-05 09:52:04.559330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:20:17.042 [2024-12-05 09:52:04.559338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:20:17.042 [2024-12-05 09:52:04.559344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:20:17.042 [2024-12-05 09:52:04.559349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:20:17.042 [2024-12-05 09:52:04.559355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:20:17.042 [2024-12-05 09:52:04.559361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:20:17.042 [2024-12-05 09:52:04.559367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:20:17.042 [2024-12-05 09:52:04.559372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:20:17.042 [2024-12-05 09:52:04.559378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:20:17.042 [2024-12-05 09:52:04.559384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:20:17.042 [2024-12-05 09:52:04.559390] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:20:17.042 [2024-12-05 09:52:04.559396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:17.042 [2024-12-05 09:52:04.559402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:20:17.042 [2024-12-05 09:52:04.559407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:20:17.042 [2024-12-05 09:52:04.559412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:20:17.042 [2024-12-05 09:52:04.559418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:20:17.042 [2024-12-05 09:52:04.559424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.042 [2024-12-05 09:52:04.559431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:20:17.042 [2024-12-05 09:52:04.559437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms
00:20:17.042 [2024-12-05 09:52:04.559442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.042 [2024-12-05 09:52:04.580284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.042 [2024-12-05 09:52:04.580313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:17.042 [2024-12-05 09:52:04.580322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.801 ms
00:20:17.042 [2024-12-05 09:52:04.580329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.042 [2024-12-05 09:52:04.580423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.042 [2024-12-05 09:52:04.580431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:20:17.042 [2024-12-05 09:52:04.580438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms
00:20:17.042 [2024-12-05 09:52:04.580445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.042 [2024-12-05 09:52:04.619515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.042 [2024-12-05 09:52:04.619545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:17.042 [2024-12-05 09:52:04.619556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.048 ms
00:20:17.042 [2024-12-05 09:52:04.619563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.042 [2024-12-05 09:52:04.619621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.042 [2024-12-05 09:52:04.619629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:17.042 [2024-12-05 09:52:04.619636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:17.042 [2024-12-05 09:52:04.619641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.042 [2024-12-05 09:52:04.619943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.042 [2024-12-05 09:52:04.619956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:17.042 [2024-12-05 09:52:04.619963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms
00:20:17.042 [2024-12-05 09:52:04.619972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.042 [2024-12-05 09:52:04.620073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.042 [2024-12-05 09:52:04.620081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:17.042 [2024-12-05 09:52:04.620087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms
00:20:17.042 [2024-12-05 09:52:04.620093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.042 [2024-12-05 09:52:04.630863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.042 [2024-12-05 09:52:04.630983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:17.042 [2024-12-05 09:52:04.630996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.754 ms
00:20:17.042 [2024-12-05 09:52:04.631002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.042 [2024-12-05 09:52:04.641192] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:20:17.042 [2024-12-05 09:52:04.641220] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:20:17.042 [2024-12-05 09:52:04.641229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.042 [2024-12-05 09:52:04.641235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:20:17.042 [2024-12-05 09:52:04.641242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.139 ms
00:20:17.042 [2024-12-05 09:52:04.641248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.042 [2024-12-05 09:52:04.659857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.042 [2024-12-05 09:52:04.659885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:20:17.042 [2024-12-05 09:52:04.659894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.553 ms
00:20:17.042 [2024-12-05 09:52:04.659901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.669233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.669334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:20:17.303 [2024-12-05 09:52:04.669346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.273 ms
00:20:17.303 [2024-12-05 09:52:04.669352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.678423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.678447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:20:17.303 [2024-12-05 09:52:04.678455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.030 ms
00:20:17.303 [2024-12-05 09:52:04.678461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.678935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.678956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:20:17.303 [2024-12-05 09:52:04.678964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms
00:20:17.303 [2024-12-05 09:52:04.678971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.723371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.723414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:20:17.303 [2024-12-05 09:52:04.723424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.383 ms
00:20:17.303 [2024-12-05 09:52:04.723431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.731255] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:20:17.303 [2024-12-05 09:52:04.742946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.742974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:20:17.303 [2024-12-05 09:52:04.742984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.451 ms
00:20:17.303 [2024-12-05 09:52:04.742993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.743064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.743073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:20:17.303 [2024-12-05 09:52:04.743079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:20:17.303 [2024-12-05 09:52:04.743085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.743120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.743127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:20:17.303 [2024-12-05 09:52:04.743133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms
00:20:17.303 [2024-12-05 09:52:04.743142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.743167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.743174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:20:17.303 [2024-12-05 09:52:04.743180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:20:17.303 [2024-12-05 09:52:04.743186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.743210] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:20:17.303 [2024-12-05 09:52:04.743218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.743224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:20:17.303 [2024-12-05 09:52:04.743230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:20:17.303 [2024-12-05 09:52:04.743236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.761186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.761212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:20:17.303 [2024-12-05 09:52:04.761221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.936 ms
00:20:17.303 [2024-12-05 09:52:04.761228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.761295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.761303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:17.303 [2024-12-05 09:52:04.761310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms
00:20:17.303 [2024-12-05 09:52:04.761316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.762235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:17.303 [2024-12-05 09:52:04.764588] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 224.323 ms, result 0
00:20:17.303 [2024-12-05 09:52:04.765647] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:17.303 [2024-12-05 09:52:04.776233] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:17.303 [2024-12-05T09:52:04.932Z] Copying: 4096/4096 [kB] (average 38 MBps)
[2024-12-05 09:52:04.882242] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:17.303 [2024-12-05 09:52:04.888884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.888978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:17.303 [2024-12-05 09:52:04.889027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:20:17.303 [2024-12-05 09:52:04.889045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.889073] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:17.303 [2024-12-05 09:52:04.891338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.891419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:17.303 [2024-12-05 09:52:04.891459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.063 ms
00:20:17.303 [2024-12-05 09:52:04.891476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.893272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.893379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:17.303 [2024-12-05 09:52:04.893426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.767 ms
00:20:17.303 [2024-12-05 09:52:04.893444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.896554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.896627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:17.303 [2024-12-05 09:52:04.896668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.082 ms
00:20:17.303 [2024-12-05 09:52:04.896684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.901981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.902059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:20:17.303 [2024-12-05 09:52:04.902101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.268 ms
00:20:17.303 [2024-12-05 09:52:04.902118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.303 [2024-12-05 09:52:04.919230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.303 [2024-12-05 09:52:04.919317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:20:17.303 [2024-12-05 09:52:04.919355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.057 ms
00:20:17.303 [2024-12-05 09:52:04.919372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.567 [2024-12-05 09:52:04.930769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.567 [2024-12-05 09:52:04.930867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:20:17.567 [2024-12-05 09:52:04.930905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.365 ms
00:20:17.567 [2024-12-05 09:52:04.930921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.567 [2024-12-05 09:52:04.931020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.567 [2024-12-05 09:52:04.931040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:20:17.567 [2024-12-05 09:52:04.931061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
00:20:17.567 [2024-12-05 09:52:04.931075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.567 [2024-12-05 09:52:04.948735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.567 [2024-12-05 09:52:04.948821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:20:17.567 [2024-12-05 09:52:04.948858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.639 ms
00:20:17.567 [2024-12-05 09:52:04.948874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.567 [2024-12-05 09:52:04.966109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.567 [2024-12-05 09:52:04.966189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:20:17.567 [2024-12-05 09:52:04.966225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.181 ms
00:20:17.567 [2024-12-05 09:52:04.966241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.567 [2024-12-05 09:52:04.983945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.567 [2024-12-05 09:52:04.984031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:20:17.567 [2024-12-05 09:52:04.984072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.672 ms
00:20:17.567 [2024-12-05 09:52:04.984089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.567 [2024-12-05 09:52:05.001919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.567 [2024-12-05 09:52:05.002026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:17.567 [2024-12-05 09:52:05.002073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.607 ms
00:20:17.567 [2024-12-05 09:52:05.002091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.567 [2024-12-05 09:52:05.002124] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:17.567 [2024-12-05 09:52:05.002396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:20:17.567 [2024-12-05 09:52:05.002442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:20:17.567 [2024-12-05 09:52:05.002500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:20:17.567 [2024-12-05 09:52:05.002548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:20:17.567 [2024-12-05 09:52:05.002572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.002665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.002763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.002786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.002809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.002831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.002853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.002903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.002928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.002950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.002972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.003951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.004004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.004045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.004094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.004120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.004142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.004165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:17.567 [2024-12-05 09:52:05.004236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004452] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:17.568 [2024-12-05 09:52:05.004601] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:17.568 [2024-12-05 09:52:05.004608] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7839b8c0-c26e-4e3f-8cc1-f17ca740b133 00:20:17.568 [2024-12-05 09:52:05.004614] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:17.568 [2024-12-05 09:52:05.004620] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:17.568 [2024-12-05 09:52:05.004625] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:17.568 [2024-12-05 09:52:05.004631] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:17.568 [2024-12-05 09:52:05.004637] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:17.568 [2024-12-05 09:52:05.004642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:17.568 [2024-12-05 09:52:05.004650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:17.568 [2024-12-05 09:52:05.004655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:17.568 [2024-12-05 09:52:05.004660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:17.568 [2024-12-05 09:52:05.004667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.568 [2024-12-05 09:52:05.004673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:17.568 [2024-12-05 09:52:05.004680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.543 ms 00:20:17.568 [2024-12-05 09:52:05.004686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.568 [2024-12-05 09:52:05.013770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.568 [2024-12-05 09:52:05.013862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:17.568 [2024-12-05 09:52:05.013873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.062 ms 00:20:17.568 [2024-12-05 09:52:05.013879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.568 [2024-12-05 09:52:05.014161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.568 [2024-12-05 09:52:05.014178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:17.568 [2024-12-05 09:52:05.014185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:20:17.568 [2024-12-05 09:52:05.014190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.568 [2024-12-05 09:52:05.042051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.568 [2024-12-05 09:52:05.042077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.568 [2024-12-05 09:52:05.042085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.568 [2024-12-05 09:52:05.042094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.568 [2024-12-05 09:52:05.042148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.568 [2024-12-05 09:52:05.042155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.568 [2024-12-05 09:52:05.042161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.568 [2024-12-05 09:52:05.042166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.568 [2024-12-05 09:52:05.042196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.568 [2024-12-05 09:52:05.042203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.568 [2024-12-05 09:52:05.042209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.568 [2024-12-05 09:52:05.042215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.568 [2024-12-05 09:52:05.042230] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.568 [2024-12-05 09:52:05.042236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.568 [2024-12-05 09:52:05.042242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.568 [2024-12-05 09:52:05.042247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.568 [2024-12-05 09:52:05.102163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.568 [2024-12-05 09:52:05.102192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.568 [2024-12-05 09:52:05.102201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.568 [2024-12-05 09:52:05.102211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.568 [2024-12-05 09:52:05.151117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.568 [2024-12-05 09:52:05.151146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.568 [2024-12-05 09:52:05.151155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.568 [2024-12-05 09:52:05.151162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.568 [2024-12-05 09:52:05.151198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.568 [2024-12-05 09:52:05.151205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.568 [2024-12-05 09:52:05.151211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.568 [2024-12-05 09:52:05.151217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.568 [2024-12-05 09:52:05.151239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.568 [2024-12-05 09:52:05.151249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.569 [2024-12-05 09:52:05.151255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.569 [2024-12-05 09:52:05.151261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.569 [2024-12-05 09:52:05.151327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.569 [2024-12-05 09:52:05.151335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.569 [2024-12-05 09:52:05.151341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.569 [2024-12-05 09:52:05.151346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.569 [2024-12-05 09:52:05.151370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.569 [2024-12-05 09:52:05.151377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:17.569 [2024-12-05 09:52:05.151385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.569 [2024-12-05 09:52:05.151391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.569 [2024-12-05 09:52:05.151420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.569 [2024-12-05 09:52:05.151428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.569 [2024-12-05 09:52:05.151435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.569 [2024-12-05 09:52:05.151440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:17.569 [2024-12-05 09:52:05.151474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.569 [2024-12-05 09:52:05.151483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.569 [2024-12-05 09:52:05.151490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.569 [2024-12-05 09:52:05.151496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.569 [2024-12-05 09:52:05.151616] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 262.721 ms, result 0 00:20:18.140 00:20:18.140 00:20:18.140 09:52:05 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76875 00:20:18.140 09:52:05 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76875 00:20:18.140 09:52:05 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:18.140 09:52:05 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76875 ']' 00:20:18.140 09:52:05 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.140 09:52:05 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.140 09:52:05 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.140 09:52:05 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.140 09:52:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:18.402 [2024-12-05 09:52:05.791987] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:20:18.402 [2024-12-05 09:52:05.792294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76875 ] 00:20:18.402 [2024-12-05 09:52:05.949298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.402 [2024-12-05 09:52:06.026937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.345 09:52:06 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.345 09:52:06 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:19.345 09:52:06 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:19.345 [2024-12-05 09:52:06.815412] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.345 [2024-12-05 09:52:06.815459] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.609 [2024-12-05 09:52:06.983444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.609 [2024-12-05 09:52:06.983479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:19.609 [2024-12-05 09:52:06.983491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:19.609 [2024-12-05 09:52:06.983497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.609 [2024-12-05 09:52:06.985589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.609 [2024-12-05 09:52:06.985616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.609 [2024-12-05 09:52:06.985624] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.063 ms 00:20:19.609 [2024-12-05 09:52:06.985630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.609 [2024-12-05 09:52:06.985689] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:19.609 [2024-12-05 09:52:06.986243] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:19.609 [2024-12-05 09:52:06.986261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.609 [2024-12-05 09:52:06.986268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.609 [2024-12-05 09:52:06.986276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:20:19.609 [2024-12-05 09:52:06.986282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.609 [2024-12-05 09:52:06.987256] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:19.609 [2024-12-05 09:52:06.997192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.609 [2024-12-05 09:52:06.997322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:19.609 [2024-12-05 09:52:06.997335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.939 ms 00:20:19.609 [2024-12-05 09:52:06.997342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.609 [2024-12-05 09:52:06.997410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.609 [2024-12-05 09:52:06.997421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:19.609 [2024-12-05 09:52:06.997427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:19.609 [2024-12-05 09:52:06.997434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.609 [2024-12-05 09:52:07.001772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.609 [2024-12-05 09:52:07.001800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.609 [2024-12-05 09:52:07.001807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.301 ms 00:20:19.609 [2024-12-05 09:52:07.001814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.609 [2024-12-05 09:52:07.001888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.609 [2024-12-05 09:52:07.001896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.609 [2024-12-05 09:52:07.001902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:19.609 [2024-12-05 09:52:07.001915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.609 [2024-12-05 09:52:07.001934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.609 [2024-12-05 09:52:07.001942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:19.609 [2024-12-05 09:52:07.001948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:19.609 [2024-12-05 09:52:07.001955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.609 [2024-12-05 09:52:07.001972] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:19.609 [2024-12-05 09:52:07.004654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:19.609 [2024-12-05 09:52:07.004676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.609 [2024-12-05 09:52:07.004685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.685 ms 00:20:19.609 [2024-12-05 09:52:07.004691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.609 [2024-12-05 09:52:07.004720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.609 [2024-12-05 09:52:07.004726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:19.609 [2024-12-05 09:52:07.004734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:19.609 [2024-12-05 09:52:07.004741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.609 [2024-12-05 09:52:07.004757] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:19.609 [2024-12-05 09:52:07.004772] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:19.609 [2024-12-05 09:52:07.004804] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:19.609 [2024-12-05 09:52:07.004815] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:19.609 [2024-12-05 09:52:07.004896] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:19.609 [2024-12-05 09:52:07.004904] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:19.609 [2024-12-05 09:52:07.004916] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:19.609 [2024-12-05 09:52:07.004924] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:19.609 [2024-12-05 09:52:07.004932] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:19.609 [2024-12-05 09:52:07.004938] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:19.609 [2024-12-05 09:52:07.004945] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:19.609 [2024-12-05 09:52:07.004950] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:19.609 [2024-12-05 09:52:07.004958] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:19.609 [2024-12-05 09:52:07.004964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.609 [2024-12-05 09:52:07.004971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:19.609 [2024-12-05 09:52:07.004978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:20:19.609 [2024-12-05 09:52:07.004985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.609 [2024-12-05 09:52:07.005051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.609 [2024-12-05 09:52:07.005060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:19.609 [2024-12-05 09:52:07.005066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:19.609 [2024-12-05 09:52:07.005073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.609 
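The layout numbers just reported are internally consistent: an l2p region of 90.00 MiB (see the region dump below) holding 4-byte entries yields exactly the 23592960 L2P entries logged, and at a 4 KiB FTL block size (an assumption — the block size is not printed here) that many entries address 90 GiB of user LBA space. A quick shell check:

  echo $(( 90 * 1024 * 1024 / 4 ))         # 90 MiB of 4-byte entries -> 23592960
  echo $(( 23592960 * 4 / 1024 / 1024 ))   # 23592960 x 4 KiB blocks  -> 90 (GiB)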
[2024-12-05 09:52:07.005148] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:19.609 [2024-12-05 09:52:07.005157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:19.609 [2024-12-05 09:52:07.005163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.609 [2024-12-05 09:52:07.005170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.609 [2024-12-05 09:52:07.005176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:19.609 [2024-12-05 09:52:07.005185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:19.609 [2024-12-05 09:52:07.005191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:19.609 [2024-12-05 09:52:07.005199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:19.609 [2024-12-05 09:52:07.005204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:19.609 [2024-12-05 09:52:07.005211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.609 [2024-12-05 09:52:07.005216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:19.609 [2024-12-05 09:52:07.005224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:19.610 [2024-12-05 09:52:07.005229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.610 [2024-12-05 09:52:07.005236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:19.610 [2024-12-05 09:52:07.005242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:19.610 [2024-12-05 09:52:07.005248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.610 [2024-12-05 09:52:07.005253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:19.610 [2024-12-05 09:52:07.005261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:19.610 [2024-12-05 09:52:07.005270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.610 [2024-12-05 09:52:07.005276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:19.610 [2024-12-05 09:52:07.005281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:19.610 [2024-12-05 09:52:07.005288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.610 [2024-12-05 09:52:07.005293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:19.610 [2024-12-05 09:52:07.005300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:19.610 [2024-12-05 09:52:07.005306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.610 [2024-12-05 09:52:07.005312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:19.610 [2024-12-05 09:52:07.005317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:19.610 [2024-12-05 09:52:07.005324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.610 [2024-12-05 09:52:07.005329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:19.610 [2024-12-05 09:52:07.005336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:19.610 [2024-12-05 09:52:07.005342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.610 [2024-12-05 09:52:07.005348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:20:19.610 [2024-12-05 09:52:07.005353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:19.610 [2024-12-05 09:52:07.005359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.610 [2024-12-05 09:52:07.005365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:19.610 [2024-12-05 09:52:07.005371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:19.610 [2024-12-05 09:52:07.005376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.610 [2024-12-05 09:52:07.005382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:19.610 [2024-12-05 09:52:07.005387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:19.610 [2024-12-05 09:52:07.005395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.610 [2024-12-05 09:52:07.005400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:19.610 [2024-12-05 09:52:07.005407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:19.610 [2024-12-05 09:52:07.005413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.610 [2024-12-05 09:52:07.005419] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:19.610 [2024-12-05 09:52:07.005426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:19.610 [2024-12-05 09:52:07.005433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.610 [2024-12-05 09:52:07.005439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.610 [2024-12-05 09:52:07.005446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:19.610 [2024-12-05 09:52:07.005452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:19.610 [2024-12-05 09:52:07.005458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:19.610 [2024-12-05 09:52:07.005463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:19.610 [2024-12-05 09:52:07.005469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:19.610 [2024-12-05 09:52:07.005475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:19.610 [2024-12-05 09:52:07.005486] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:19.610 [2024-12-05 09:52:07.005494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.610 [2024-12-05 09:52:07.005504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:19.610 [2024-12-05 09:52:07.005523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:19.610 [2024-12-05 09:52:07.005530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:19.610 [2024-12-05 09:52:07.005536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:19.610 [2024-12-05 09:52:07.005543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:20:19.610 [2024-12-05 09:52:07.005549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:19.610 [2024-12-05 09:52:07.005556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:19.610 [2024-12-05 09:52:07.005561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:19.610 [2024-12-05 09:52:07.005568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:19.610 [2024-12-05 09:52:07.005574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:19.610 [2024-12-05 09:52:07.005581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:19.610 [2024-12-05 09:52:07.005586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:19.610 [2024-12-05 09:52:07.005593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:19.610 [2024-12-05 09:52:07.005599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:19.610 [2024-12-05 09:52:07.005606] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:19.610 [2024-12-05 09:52:07.005612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.610 [2024-12-05 09:52:07.005622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:19.610 [2024-12-05 09:52:07.005628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:19.610 [2024-12-05 09:52:07.005635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:19.610 [2024-12-05 09:52:07.005640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:19.610 [2024-12-05 09:52:07.005648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.610 [2024-12-05 09:52:07.005653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:19.610 [2024-12-05 09:52:07.005663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:20:19.610 [2024-12-05 09:52:07.005671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.610 [2024-12-05 09:52:07.026274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.610 [2024-12-05 09:52:07.026303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.610 [2024-12-05 09:52:07.026313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.558 ms 00:20:19.610 [2024-12-05 09:52:07.026322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.610 
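A cross-check on the nvc superblock layout dumped above: the regions tile the cache contiguously (each blk_offs equals the previous blk_offs plus its blk_sz), and the trailing free region (type 0xfffffffe at 0x7c20, size 0x13b6e0) closes it out at exactly the advertised 5171.00 MiB NV cache capacity — again assuming 4 KiB blocks:

  printf '%d\n' $(( (0x7c20 + 0x13b6e0) * 4 / 1024 ))   # -> 5171 (MiB)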
[2024-12-05 09:52:07.026413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.610 [2024-12-05 09:52:07.026421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:19.610 [2024-12-05 09:52:07.026429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:19.610 [2024-12-05 09:52:07.026435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.610 [2024-12-05 09:52:07.050298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.610 [2024-12-05 09:52:07.050324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.610 [2024-12-05 09:52:07.050333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.845 ms 00:20:19.610 [2024-12-05 09:52:07.050339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.610 [2024-12-05 09:52:07.050384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.610 [2024-12-05 09:52:07.050391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.610 [2024-12-05 09:52:07.050399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:19.610 [2024-12-05 09:52:07.050404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.610 [2024-12-05 09:52:07.050712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.610 [2024-12-05 09:52:07.050723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.610 [2024-12-05 09:52:07.050734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:20:19.610 [2024-12-05 09:52:07.050740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.610 [2024-12-05 09:52:07.050840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.610 [2024-12-05 09:52:07.050847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.610 [2024-12-05 09:52:07.050855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:19.610 [2024-12-05 09:52:07.050861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.610 [2024-12-05 09:52:07.062419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.610 [2024-12-05 09:52:07.062443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.610 [2024-12-05 09:52:07.062452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.540 ms 00:20:19.610 [2024-12-05 09:52:07.062457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.610 [2024-12-05 09:52:07.089694] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:19.610 [2024-12-05 09:52:07.089724] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:19.610 [2024-12-05 09:52:07.089737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.610 [2024-12-05 09:52:07.089744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:19.611 [2024-12-05 09:52:07.089752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.192 ms 00:20:19.611 [2024-12-05 09:52:07.089762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.611 [2024-12-05 09:52:07.108475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:19.611 [2024-12-05 09:52:07.108503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:19.611 [2024-12-05 09:52:07.108523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.654 ms 00:20:19.611 [2024-12-05 09:52:07.108530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.611 [2024-12-05 09:52:07.117210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.611 [2024-12-05 09:52:07.117243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:19.611 [2024-12-05 09:52:07.117254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.622 ms 00:20:19.611 [2024-12-05 09:52:07.117260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.611 [2024-12-05 09:52:07.126217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.611 [2024-12-05 09:52:07.126241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:19.611 [2024-12-05 09:52:07.126249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.915 ms 00:20:19.611 [2024-12-05 09:52:07.126255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.611 [2024-12-05 09:52:07.126728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.611 [2024-12-05 09:52:07.126739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:19.611 [2024-12-05 09:52:07.126749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:20:19.611 [2024-12-05 09:52:07.126755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.611 [2024-12-05 09:52:07.170647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.611 [2024-12-05 09:52:07.170790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:19.611 [2024-12-05 09:52:07.170810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.873 ms 00:20:19.611 [2024-12-05 09:52:07.170817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.611 [2024-12-05 09:52:07.178972] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:19.611 [2024-12-05 09:52:07.190262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.611 [2024-12-05 09:52:07.190392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:19.611 [2024-12-05 09:52:07.190407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.394 ms 00:20:19.611 [2024-12-05 09:52:07.190415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.611 [2024-12-05 09:52:07.190484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.611 [2024-12-05 09:52:07.190494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:19.611 [2024-12-05 09:52:07.190501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:19.611 [2024-12-05 09:52:07.190522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.611 [2024-12-05 09:52:07.190560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.611 [2024-12-05 09:52:07.190568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:19.611 [2024-12-05 09:52:07.190575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.022 ms 00:20:19.611 [2024-12-05 09:52:07.190584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.611 [2024-12-05 09:52:07.190602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.611 [2024-12-05 09:52:07.190610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:19.611 [2024-12-05 09:52:07.190616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:19.611 [2024-12-05 09:52:07.190624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.611 [2024-12-05 09:52:07.190649] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:19.611 [2024-12-05 09:52:07.190659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.611 [2024-12-05 09:52:07.190667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:19.611 [2024-12-05 09:52:07.190674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:19.611 [2024-12-05 09:52:07.190679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.611 [2024-12-05 09:52:07.208828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.611 [2024-12-05 09:52:07.208854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:19.611 [2024-12-05 09:52:07.208865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.130 ms 00:20:19.611 [2024-12-05 09:52:07.208872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.611 [2024-12-05 09:52:07.208939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.611 [2024-12-05 09:52:07.208947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:19.611 [2024-12-05 09:52:07.208955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:19.611 [2024-12-05 09:52:07.208962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.611 [2024-12-05 09:52:07.209568] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.611 [2024-12-05 09:52:07.211968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 225.894 ms, result 0 00:20:19.611 [2024-12-05 09:52:07.213774] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:19.611 Some configs were skipped because the RPC state that can call them passed over. 
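With the device restored from its superblock, trim.sh exercises unmap at both ends of the LBA space via the two RPCs traced below. The second --lba is simply the L2P entry count minus the block count (23592960 - 1024 = 23591936), i.e. the last 1024-block window. Issued by hand they would look like this (paths as in this run):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024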
00:20:19.873 09:52:07 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:19.873 [2024-12-05 09:52:07.439301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.873 [2024-12-05 09:52:07.439414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:19.873 [2024-12-05 09:52:07.439428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.734 ms 00:20:19.873 [2024-12-05 09:52:07.439437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.873 [2024-12-05 09:52:07.439465] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.898 ms, result 0 00:20:19.873 true 00:20:19.873 09:52:07 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:20.134 [2024-12-05 09:52:07.636539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.134 [2024-12-05 09:52:07.636573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:20.134 [2024-12-05 09:52:07.636583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.746 ms 00:20:20.134 [2024-12-05 09:52:07.636590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.134 [2024-12-05 09:52:07.636617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.826 ms, result 0 00:20:20.134 true 00:20:20.134 09:52:07 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76875 00:20:20.134 09:52:07 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76875 ']' 00:20:20.134 09:52:07 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76875 00:20:20.134 09:52:07 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:20.134 09:52:07 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.134 09:52:07 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76875 00:20:20.134 killing process with pid 76875 00:20:20.134 09:52:07 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.134 09:52:07 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.134 09:52:07 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76875' 00:20:20.134 09:52:07 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76875 00:20:20.134 09:52:07 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76875 00:20:20.707 [2024-12-05 09:52:08.213786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.707 [2024-12-05 09:52:08.213832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:20.707 [2024-12-05 09:52:08.213842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:20.707 [2024-12-05 09:52:08.213849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.707 [2024-12-05 09:52:08.213868] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:20.707 [2024-12-05 09:52:08.215946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.707 [2024-12-05 09:52:08.215970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:20.707 [2024-12-05 09:52:08.215982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.064 ms 00:20:20.707 [2024-12-05 09:52:08.215988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.707 [2024-12-05 09:52:08.216211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.707 [2024-12-05 09:52:08.216219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:20.707 [2024-12-05 09:52:08.216227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:20:20.707 [2024-12-05 09:52:08.216233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.707 [2024-12-05 09:52:08.220191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.707 [2024-12-05 09:52:08.220217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:20.707 [2024-12-05 09:52:08.220227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.940 ms 00:20:20.707 [2024-12-05 09:52:08.220233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.708 [2024-12-05 09:52:08.225519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.708 [2024-12-05 09:52:08.225542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:20.708 [2024-12-05 09:52:08.225553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.254 ms 00:20:20.708 [2024-12-05 09:52:08.225559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.708 [2024-12-05 09:52:08.233859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.708 [2024-12-05 09:52:08.233887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:20.708 [2024-12-05 09:52:08.233898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.256 ms 00:20:20.708 [2024-12-05 09:52:08.233904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.708 [2024-12-05 09:52:08.240591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.708 [2024-12-05 09:52:08.240616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:20.708 [2024-12-05 09:52:08.240626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.654 ms 00:20:20.708 [2024-12-05 09:52:08.240633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.708 [2024-12-05 09:52:08.240741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.708 [2024-12-05 09:52:08.240749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:20.708 [2024-12-05 09:52:08.240757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:20.708 [2024-12-05 09:52:08.240763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.708 [2024-12-05 09:52:08.249236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.708 [2024-12-05 09:52:08.249259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:20.708 [2024-12-05 09:52:08.249268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.457 ms 00:20:20.708 [2024-12-05 09:52:08.249274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.708 [2024-12-05 09:52:08.257216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.708 [2024-12-05 09:52:08.257239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:20.708 [2024-12-05 
09:52:08.257251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.911 ms 00:20:20.708 [2024-12-05 09:52:08.257256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.708 [2024-12-05 09:52:08.264135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.708 [2024-12-05 09:52:08.264275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:20.708 [2024-12-05 09:52:08.264289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.848 ms 00:20:20.708 [2024-12-05 09:52:08.264295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.708 [2024-12-05 09:52:08.271893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.708 [2024-12-05 09:52:08.271929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:20.708 [2024-12-05 09:52:08.271939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.540 ms 00:20:20.708 [2024-12-05 09:52:08.271944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.708 [2024-12-05 09:52:08.271972] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:20.708 [2024-12-05 09:52:08.271983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.271991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.271998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272090] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 
09:52:08.272255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:20.708 [2024-12-05 09:52:08.272261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:20.709 [2024-12-05 09:52:08.272415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:20.709 [2024-12-05 09:52:08.272659] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:20.709 [2024-12-05 09:52:08.272670] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7839b8c0-c26e-4e3f-8cc1-f17ca740b133 00:20:20.709 [2024-12-05 09:52:08.272679] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:20.709 [2024-12-05 09:52:08.272685] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:20.709 [2024-12-05 09:52:08.272690] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:20.709 [2024-12-05 09:52:08.272697] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:20.709 [2024-12-05 09:52:08.272702] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:20.709 [2024-12-05 09:52:08.272709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:20.709 [2024-12-05 09:52:08.272715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:20.709 [2024-12-05 09:52:08.272722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:20.709 [2024-12-05 09:52:08.272730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:20.709 [2024-12-05 09:52:08.272736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.709 [2024-12-05 09:52:08.272742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:20.709 [2024-12-05 09:52:08.272749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:20:20.709 [2024-12-05 09:52:08.272755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.709 [2024-12-05 09:52:08.282425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.709 [2024-12-05 09:52:08.282448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:20.710 [2024-12-05 09:52:08.282459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.648 ms 00:20:20.710 [2024-12-05 09:52:08.282465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.710 [2024-12-05 09:52:08.282784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:20.710 [2024-12-05 09:52:08.282794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:20.710 [2024-12-05 09:52:08.282805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:20:20.710 [2024-12-05 09:52:08.282810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.710 [2024-12-05 09:52:08.318009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.710 [2024-12-05 09:52:08.318136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:20.710 [2024-12-05 09:52:08.318151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.710 [2024-12-05 09:52:08.318158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.710 [2024-12-05 09:52:08.318239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.710 [2024-12-05 09:52:08.318246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.710 [2024-12-05 09:52:08.318256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.710 [2024-12-05 09:52:08.318261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.710 [2024-12-05 09:52:08.318301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.710 [2024-12-05 09:52:08.318309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.710 [2024-12-05 09:52:08.318318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.710 [2024-12-05 09:52:08.318324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.710 [2024-12-05 09:52:08.318338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.710 [2024-12-05 09:52:08.318345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.710 [2024-12-05 09:52:08.318352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.710 [2024-12-05 09:52:08.318359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.971 [2024-12-05 09:52:08.378108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.971 [2024-12-05 09:52:08.378136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.971 [2024-12-05 09:52:08.378146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.971 [2024-12-05 09:52:08.378153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.971 [2024-12-05 09:52:08.426258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.971 [2024-12-05 09:52:08.426288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.971 [2024-12-05 09:52:08.426298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.971 [2024-12-05 09:52:08.426306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.971 [2024-12-05 09:52:08.426364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.971 [2024-12-05 09:52:08.426372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:20.971 [2024-12-05 09:52:08.426381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.971 [2024-12-05 09:52:08.426387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:20.971 [2024-12-05 09:52:08.426411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.971 [2024-12-05 09:52:08.426418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.971 [2024-12-05 09:52:08.426425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.971 [2024-12-05 09:52:08.426431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.971 [2024-12-05 09:52:08.426505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.971 [2024-12-05 09:52:08.426534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.972 [2024-12-05 09:52:08.426543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.972 [2024-12-05 09:52:08.426549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.972 [2024-12-05 09:52:08.426576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.972 [2024-12-05 09:52:08.426583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:20.972 [2024-12-05 09:52:08.426590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.972 [2024-12-05 09:52:08.426596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.972 [2024-12-05 09:52:08.426627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.972 [2024-12-05 09:52:08.426634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.972 [2024-12-05 09:52:08.426643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.972 [2024-12-05 09:52:08.426649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.972 [2024-12-05 09:52:08.426685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.972 [2024-12-05 09:52:08.426693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.972 [2024-12-05 09:52:08.426700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.972 [2024-12-05 09:52:08.426705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.972 [2024-12-05 09:52:08.426811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 213.007 ms, result 0 00:20:21.546 09:52:08 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:21.546 [2024-12-05 09:52:09.024630] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:20:21.546 [2024-12-05 09:52:09.024781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76929 ] 00:20:21.807 [2024-12-05 09:52:09.182595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.807 [2024-12-05 09:52:09.269239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.070 [2024-12-05 09:52:09.480429] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:22.070 [2024-12-05 09:52:09.480605] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:22.070 [2024-12-05 09:52:09.632685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.070 [2024-12-05 09:52:09.632719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:22.070 [2024-12-05 09:52:09.632729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:22.070 [2024-12-05 09:52:09.632735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.070 [2024-12-05 09:52:09.634812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.070 [2024-12-05 09:52:09.634936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:22.070 [2024-12-05 09:52:09.634949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.065 ms 00:20:22.070 [2024-12-05 09:52:09.634956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.070 [2024-12-05 09:52:09.635009] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:22.070 [2024-12-05 09:52:09.635536] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:22.070 [2024-12-05 09:52:09.635553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.070 [2024-12-05 09:52:09.635559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:22.070 [2024-12-05 09:52:09.635567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:20:22.070 [2024-12-05 09:52:09.635573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.070 [2024-12-05 09:52:09.636560] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:22.070 [2024-12-05 09:52:09.646482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.070 [2024-12-05 09:52:09.646602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:22.070 [2024-12-05 09:52:09.646616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.923 ms 00:20:22.070 [2024-12-05 09:52:09.646622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.070 [2024-12-05 09:52:09.646693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.070 [2024-12-05 09:52:09.646702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:22.070 [2024-12-05 09:52:09.646709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:22.070 [2024-12-05 09:52:09.646715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.070 [2024-12-05 09:52:09.651249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:22.070 [2024-12-05 09:52:09.651273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:22.070 [2024-12-05 09:52:09.651280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.504 ms 00:20:22.070 [2024-12-05 09:52:09.651286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.070 [2024-12-05 09:52:09.651357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.070 [2024-12-05 09:52:09.651364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:22.070 [2024-12-05 09:52:09.651371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:22.070 [2024-12-05 09:52:09.651376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.070 [2024-12-05 09:52:09.651394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.070 [2024-12-05 09:52:09.651401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:22.070 [2024-12-05 09:52:09.651407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:22.070 [2024-12-05 09:52:09.651413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.070 [2024-12-05 09:52:09.651431] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:22.070 [2024-12-05 09:52:09.654230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.070 [2024-12-05 09:52:09.654279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:22.070 [2024-12-05 09:52:09.654288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.803 ms 00:20:22.070 [2024-12-05 09:52:09.654294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.071 [2024-12-05 09:52:09.654325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.071 [2024-12-05 09:52:09.654331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:22.071 [2024-12-05 09:52:09.654337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:22.071 [2024-12-05 09:52:09.654342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.071 [2024-12-05 09:52:09.654357] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:22.071 [2024-12-05 09:52:09.654371] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:22.071 [2024-12-05 09:52:09.654397] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:22.071 [2024-12-05 09:52:09.654408] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:22.071 [2024-12-05 09:52:09.654487] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:22.071 [2024-12-05 09:52:09.654495] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:22.071 [2024-12-05 09:52:09.654503] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:22.071 [2024-12-05 09:52:09.654524] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:22.071 [2024-12-05 09:52:09.654531] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:22.071 [2024-12-05 09:52:09.654537] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:22.071 [2024-12-05 09:52:09.654544] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:22.071 [2024-12-05 09:52:09.654550] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:22.071 [2024-12-05 09:52:09.654556] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:22.071 [2024-12-05 09:52:09.654562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.071 [2024-12-05 09:52:09.654568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:22.071 [2024-12-05 09:52:09.654574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:20:22.071 [2024-12-05 09:52:09.654579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.071 [2024-12-05 09:52:09.654647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.071 [2024-12-05 09:52:09.654656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:22.071 [2024-12-05 09:52:09.654662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:22.071 [2024-12-05 09:52:09.654667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.071 [2024-12-05 09:52:09.654741] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:22.071 [2024-12-05 09:52:09.654749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:22.071 [2024-12-05 09:52:09.654755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:22.071 [2024-12-05 09:52:09.654762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.071 [2024-12-05 09:52:09.654769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:22.071 [2024-12-05 09:52:09.654774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:22.071 [2024-12-05 09:52:09.654779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:22.071 [2024-12-05 09:52:09.654785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:22.071 [2024-12-05 09:52:09.654791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:22.071 [2024-12-05 09:52:09.654797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:22.071 [2024-12-05 09:52:09.654803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:22.071 [2024-12-05 09:52:09.654813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:22.071 [2024-12-05 09:52:09.654820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:22.071 [2024-12-05 09:52:09.654826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:22.071 [2024-12-05 09:52:09.654831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:22.071 [2024-12-05 09:52:09.654836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.071 [2024-12-05 09:52:09.654841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:22.071 [2024-12-05 09:52:09.654846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:22.071 [2024-12-05 09:52:09.654851] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.071 [2024-12-05 09:52:09.654856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:22.071 [2024-12-05 09:52:09.654862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:22.071 [2024-12-05 09:52:09.654867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.071 [2024-12-05 09:52:09.654872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:22.071 [2024-12-05 09:52:09.654877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:22.071 [2024-12-05 09:52:09.654882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.071 [2024-12-05 09:52:09.654887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:22.071 [2024-12-05 09:52:09.654892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:22.071 [2024-12-05 09:52:09.654897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.071 [2024-12-05 09:52:09.654904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:22.071 [2024-12-05 09:52:09.654909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:22.071 [2024-12-05 09:52:09.654914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.071 [2024-12-05 09:52:09.654919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:22.071 [2024-12-05 09:52:09.654924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:22.071 [2024-12-05 09:52:09.654929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:22.071 [2024-12-05 09:52:09.654934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:22.071 [2024-12-05 09:52:09.654940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:22.071 [2024-12-05 09:52:09.654946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:22.071 [2024-12-05 09:52:09.654951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:22.071 [2024-12-05 09:52:09.654956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:22.071 [2024-12-05 09:52:09.654961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.071 [2024-12-05 09:52:09.654966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:22.071 [2024-12-05 09:52:09.654971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:22.071 [2024-12-05 09:52:09.654976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.071 [2024-12-05 09:52:09.654983] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:22.071 [2024-12-05 09:52:09.654989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:22.071 [2024-12-05 09:52:09.654996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:22.071 [2024-12-05 09:52:09.655002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.071 [2024-12-05 09:52:09.655008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:22.071 [2024-12-05 09:52:09.655013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:22.071 [2024-12-05 09:52:09.655019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:22.071 
[2024-12-05 09:52:09.655024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:22.071 [2024-12-05 09:52:09.655029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:22.071 [2024-12-05 09:52:09.655034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:22.071 [2024-12-05 09:52:09.655040] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:22.071 [2024-12-05 09:52:09.655047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:22.071 [2024-12-05 09:52:09.655053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:22.071 [2024-12-05 09:52:09.655059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:22.071 [2024-12-05 09:52:09.655064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:22.071 [2024-12-05 09:52:09.655070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:22.071 [2024-12-05 09:52:09.655075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:22.071 [2024-12-05 09:52:09.655081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:22.071 [2024-12-05 09:52:09.655086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:22.071 [2024-12-05 09:52:09.655091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:22.071 [2024-12-05 09:52:09.655097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:22.071 [2024-12-05 09:52:09.655102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:22.071 [2024-12-05 09:52:09.655107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:22.071 [2024-12-05 09:52:09.655113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:22.071 [2024-12-05 09:52:09.655118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:22.071 [2024-12-05 09:52:09.655123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:22.071 [2024-12-05 09:52:09.655129] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:22.071 [2024-12-05 09:52:09.655136] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:22.071 [2024-12-05 09:52:09.655142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:22.072 [2024-12-05 09:52:09.655147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:22.072 [2024-12-05 09:52:09.655153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:22.072 [2024-12-05 09:52:09.655158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:22.072 [2024-12-05 09:52:09.655164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.072 [2024-12-05 09:52:09.655173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:22.072 [2024-12-05 09:52:09.655178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:20:22.072 [2024-12-05 09:52:09.655184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.072 [2024-12-05 09:52:09.675883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.072 [2024-12-05 09:52:09.675915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:22.072 [2024-12-05 09:52:09.675923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.659 ms 00:20:22.072 [2024-12-05 09:52:09.675929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.072 [2024-12-05 09:52:09.676022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.072 [2024-12-05 09:52:09.676031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:22.072 [2024-12-05 09:52:09.676037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:22.072 [2024-12-05 09:52:09.676042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.334 [2024-12-05 09:52:09.720020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.334 [2024-12-05 09:52:09.720051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:22.334 [2024-12-05 09:52:09.720062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.960 ms 00:20:22.334 [2024-12-05 09:52:09.720069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.334 [2024-12-05 09:52:09.720128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.334 [2024-12-05 09:52:09.720137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:22.334 [2024-12-05 09:52:09.720144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:22.334 [2024-12-05 09:52:09.720150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.334 [2024-12-05 09:52:09.720438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.334 [2024-12-05 09:52:09.720451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:22.334 [2024-12-05 09:52:09.720458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:20:22.334 [2024-12-05 09:52:09.720467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.334 [2024-12-05 09:52:09.720590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.334 [2024-12-05 09:52:09.720598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:22.334 [2024-12-05 09:52:09.720605] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:22.334 [2024-12-05 09:52:09.720611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.334 [2024-12-05 09:52:09.731348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.334 [2024-12-05 09:52:09.731462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:22.334 [2024-12-05 09:52:09.731474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.721 ms 00:20:22.335 [2024-12-05 09:52:09.731481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.335 [2024-12-05 09:52:09.741452] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:22.335 [2024-12-05 09:52:09.741479] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:22.335 [2024-12-05 09:52:09.741489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.335 [2024-12-05 09:52:09.741496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:22.335 [2024-12-05 09:52:09.741502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.906 ms 00:20:22.335 [2024-12-05 09:52:09.741518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.335 [2024-12-05 09:52:09.759770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.335 [2024-12-05 09:52:09.759797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:22.335 [2024-12-05 09:52:09.759806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.194 ms 00:20:22.335 [2024-12-05 09:52:09.759813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.335 [2024-12-05 09:52:09.768672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.335 [2024-12-05 09:52:09.768697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:22.335 [2024-12-05 09:52:09.768704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.804 ms 00:20:22.335 [2024-12-05 09:52:09.768709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.335 [2024-12-05 09:52:09.777531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.335 [2024-12-05 09:52:09.777631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:22.335 [2024-12-05 09:52:09.777644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.781 ms 00:20:22.335 [2024-12-05 09:52:09.777650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.335 [2024-12-05 09:52:09.778109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.335 [2024-12-05 09:52:09.778122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:22.335 [2024-12-05 09:52:09.778129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:20:22.335 [2024-12-05 09:52:09.778135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.335 [2024-12-05 09:52:09.822014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.335 [2024-12-05 09:52:09.822051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:22.335 [2024-12-05 09:52:09.822061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
43.862 ms 00:20:22.335 [2024-12-05 09:52:09.822067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.335 [2024-12-05 09:52:09.829857] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:22.335 [2024-12-05 09:52:09.841156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.335 [2024-12-05 09:52:09.841291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:22.335 [2024-12-05 09:52:09.841305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.028 ms 00:20:22.335 [2024-12-05 09:52:09.841317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.335 [2024-12-05 09:52:09.841389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.335 [2024-12-05 09:52:09.841398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:22.335 [2024-12-05 09:52:09.841404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:22.335 [2024-12-05 09:52:09.841410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.335 [2024-12-05 09:52:09.841447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.335 [2024-12-05 09:52:09.841453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:22.335 [2024-12-05 09:52:09.841459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:22.335 [2024-12-05 09:52:09.841468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.335 [2024-12-05 09:52:09.841492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.335 [2024-12-05 09:52:09.841500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:22.335 [2024-12-05 09:52:09.841506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:22.335 [2024-12-05 09:52:09.841526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.335 [2024-12-05 09:52:09.841551] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:22.335 [2024-12-05 09:52:09.841558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.335 [2024-12-05 09:52:09.841564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:22.335 [2024-12-05 09:52:09.841570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:22.335 [2024-12-05 09:52:09.841576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.335 [2024-12-05 09:52:09.859317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.335 [2024-12-05 09:52:09.859342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:22.335 [2024-12-05 09:52:09.859350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.723 ms 00:20:22.335 [2024-12-05 09:52:09.859356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.335 [2024-12-05 09:52:09.859422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.335 [2024-12-05 09:52:09.859430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:22.335 [2024-12-05 09:52:09.859437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:22.335 [2024-12-05 09:52:09.859442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.335 
[2024-12-05 09:52:09.860101] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:22.335 [2024-12-05 09:52:09.862309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 227.191 ms, result 0 00:20:22.335 [2024-12-05 09:52:09.862955] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:22.335 [2024-12-05 09:52:09.877715] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:23.726  [2024-12-05T09:52:11.930Z] Copying: 25/256 [MB] (25 MBps) [2024-12-05T09:52:13.320Z] Copying: 41/256 [MB] (15 MBps) [2024-12-05T09:52:14.262Z] Copying: 51/256 [MB] (10 MBps) [2024-12-05T09:52:15.204Z] Copying: 61/256 [MB] (10 MBps) [2024-12-05T09:52:16.146Z] Copying: 75/256 [MB] (13 MBps) [2024-12-05T09:52:17.090Z] Copying: 90/256 [MB] (15 MBps) [2024-12-05T09:52:18.035Z] Copying: 101/256 [MB] (10 MBps) [2024-12-05T09:52:18.980Z] Copying: 117/256 [MB] (16 MBps) [2024-12-05T09:52:19.927Z] Copying: 134/256 [MB] (16 MBps) [2024-12-05T09:52:21.330Z] Copying: 148/256 [MB] (14 MBps) [2024-12-05T09:52:22.272Z] Copying: 165/256 [MB] (16 MBps) [2024-12-05T09:52:23.290Z] Copying: 182/256 [MB] (17 MBps) [2024-12-05T09:52:24.291Z] Copying: 195/256 [MB] (13 MBps) [2024-12-05T09:52:25.234Z] Copying: 208/256 [MB] (12 MBps) [2024-12-05T09:52:26.172Z] Copying: 226/256 [MB] (18 MBps) [2024-12-05T09:52:26.741Z] Copying: 241/256 [MB] (14 MBps) [2024-12-05T09:52:27.311Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-05 09:52:27.049375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:39.682 [2024-12-05 09:52:27.060629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.682 [2024-12-05 09:52:27.060683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:39.682 [2024-12-05 09:52:27.060712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:39.682 [2024-12-05 09:52:27.060722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.682 [2024-12-05 09:52:27.060752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:39.682 [2024-12-05 09:52:27.063650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.682 [2024-12-05 09:52:27.063697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:39.682 [2024-12-05 09:52:27.063711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.883 ms 00:20:39.682 [2024-12-05 09:52:27.063720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.682 [2024-12-05 09:52:27.064038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.682 [2024-12-05 09:52:27.064052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:39.682 [2024-12-05 09:52:27.064062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:20:39.682 [2024-12-05 09:52:27.064071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.682 [2024-12-05 09:52:27.067791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.682 [2024-12-05 09:52:27.067816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:39.682 [2024-12-05 09:52:27.067826] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.697 ms 00:20:39.682 [2024-12-05 09:52:27.067835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.682 [2024-12-05 09:52:27.075773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.682 [2024-12-05 09:52:27.075813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:39.682 [2024-12-05 09:52:27.075825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.916 ms 00:20:39.682 [2024-12-05 09:52:27.075834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.682 [2024-12-05 09:52:27.102633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.682 [2024-12-05 09:52:27.102761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:39.682 [2024-12-05 09:52:27.102780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.718 ms 00:20:39.682 [2024-12-05 09:52:27.102789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.682 [2024-12-05 09:52:27.119266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.682 [2024-12-05 09:52:27.119315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:39.682 [2024-12-05 09:52:27.119334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.442 ms 00:20:39.682 [2024-12-05 09:52:27.119343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.682 [2024-12-05 09:52:27.119542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.682 [2024-12-05 09:52:27.119557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:39.682 [2024-12-05 09:52:27.119584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:20:39.682 [2024-12-05 09:52:27.119593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.682 [2024-12-05 09:52:27.145524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.682 [2024-12-05 09:52:27.145573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:39.682 [2024-12-05 09:52:27.145585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.909 ms 00:20:39.682 [2024-12-05 09:52:27.145592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.682 [2024-12-05 09:52:27.171437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.682 [2024-12-05 09:52:27.171484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:39.682 [2024-12-05 09:52:27.171498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.813 ms 00:20:39.682 [2024-12-05 09:52:27.171505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.682 [2024-12-05 09:52:27.197045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.682 [2024-12-05 09:52:27.197094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:39.682 [2024-12-05 09:52:27.197107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.477 ms 00:20:39.682 [2024-12-05 09:52:27.197115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.682 [2024-12-05 09:52:27.222121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.682 [2024-12-05 09:52:27.222168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL 
clean state 00:20:39.682 [2024-12-05 09:52:27.222180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.924 ms 00:20:39.682 [2024-12-05 09:52:27.222188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.682 [2024-12-05 09:52:27.222217] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:39.682 [2024-12-05 09:52:27.222233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222424] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 
09:52:27.222657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 
00:20:39.682 [2024-12-05 09:52:27.222877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.222997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.223004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.223012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.223020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.223029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.223038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.223056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.223064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.223072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.223079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 
wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.223088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.223097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.223105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:39.682 [2024-12-05 09:52:27.223122] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:39.682 [2024-12-05 09:52:27.223131] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7839b8c0-c26e-4e3f-8cc1-f17ca740b133 00:20:39.682 [2024-12-05 09:52:27.223140] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:39.682 [2024-12-05 09:52:27.223148] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:39.682 [2024-12-05 09:52:27.223157] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:39.682 [2024-12-05 09:52:27.223166] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:39.682 [2024-12-05 09:52:27.223175] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:39.682 [2024-12-05 09:52:27.223183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:39.682 [2024-12-05 09:52:27.223193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:39.682 [2024-12-05 09:52:27.223200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:39.682 [2024-12-05 09:52:27.223207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:39.682 [2024-12-05 09:52:27.223214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.682 [2024-12-05 09:52:27.223222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:39.682 [2024-12-05 09:52:27.223233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:20:39.682 [2024-12-05 09:52:27.223241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.682 [2024-12-05 09:52:27.236654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.682 [2024-12-05 09:52:27.236845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:39.682 [2024-12-05 09:52:27.236866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.378 ms 00:20:39.682 [2024-12-05 09:52:27.236875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.682 [2024-12-05 09:52:27.237290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.682 [2024-12-05 09:52:27.237302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:39.682 [2024-12-05 09:52:27.237313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:20:39.682 [2024-12-05 09:52:27.237322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.683 [2024-12-05 09:52:27.276430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.683 [2024-12-05 09:52:27.276649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.683 [2024-12-05 09:52:27.276671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.683 [2024-12-05 09:52:27.276689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.683 [2024-12-05 
09:52:27.276795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.683 [2024-12-05 09:52:27.276807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.683 [2024-12-05 09:52:27.276818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.683 [2024-12-05 09:52:27.276826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.683 [2024-12-05 09:52:27.276881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.683 [2024-12-05 09:52:27.276892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.683 [2024-12-05 09:52:27.276903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.683 [2024-12-05 09:52:27.276911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.683 [2024-12-05 09:52:27.276934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.683 [2024-12-05 09:52:27.276943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.683 [2024-12-05 09:52:27.276951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.683 [2024-12-05 09:52:27.276959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.942 [2024-12-05 09:52:27.360920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.942 [2024-12-05 09:52:27.360985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.942 [2024-12-05 09:52:27.361000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.942 [2024-12-05 09:52:27.361009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.942 [2024-12-05 09:52:27.430248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.942 [2024-12-05 09:52:27.430303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.942 [2024-12-05 09:52:27.430316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.942 [2024-12-05 09:52:27.430326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.942 [2024-12-05 09:52:27.430402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.942 [2024-12-05 09:52:27.430412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.942 [2024-12-05 09:52:27.430421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.942 [2024-12-05 09:52:27.430430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.942 [2024-12-05 09:52:27.430464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.942 [2024-12-05 09:52:27.430481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.942 [2024-12-05 09:52:27.430490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.942 [2024-12-05 09:52:27.430499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.942 [2024-12-05 09:52:27.430639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.942 [2024-12-05 09:52:27.430653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.942 [2024-12-05 09:52:27.430663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.942 [2024-12-05 09:52:27.430674] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.942 [2024-12-05 09:52:27.430707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.942 [2024-12-05 09:52:27.430718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:39.942 [2024-12-05 09:52:27.430730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.942 [2024-12-05 09:52:27.430739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.942 [2024-12-05 09:52:27.430783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.942 [2024-12-05 09:52:27.430794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.942 [2024-12-05 09:52:27.430803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.942 [2024-12-05 09:52:27.430811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.942 [2024-12-05 09:52:27.430859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.942 [2024-12-05 09:52:27.430875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.942 [2024-12-05 09:52:27.430884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.942 [2024-12-05 09:52:27.430893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.942 [2024-12-05 09:52:27.431050] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.422 ms, result 0 00:20:40.883 00:20:40.883 00:20:40.883 09:52:28 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:41.143 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:41.143 09:52:28 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:41.143 09:52:28 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:41.143 09:52:28 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:41.143 09:52:28 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:41.143 09:52:28 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:41.143 09:52:28 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:41.143 09:52:28 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76875 00:20:41.143 Process with pid 76875 is not found 00:20:41.143 09:52:28 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76875 ']' 00:20:41.143 09:52:28 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76875 00:20:41.143 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76875) - No such process 00:20:41.143 09:52:28 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76875 is not found' 00:20:41.143 00:20:41.143 real 1m12.242s 00:20:41.143 user 1m37.369s 00:20:41.143 sys 0m5.327s 00:20:41.143 09:52:28 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.143 ************************************ 00:20:41.143 END TEST ftl_trim 00:20:41.143 ************************************ 00:20:41.143 09:52:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:41.143 09:52:28 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:41.143 09:52:28 ftl -- common/autotest_common.sh@1105 -- # '[' 
5 -le 1 ']' 00:20:41.143 09:52:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.143 09:52:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:41.143 ************************************ 00:20:41.143 START TEST ftl_restore 00:20:41.143 ************************************ 00:20:41.143 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:41.405 * Looking for test storage... 00:20:41.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:41.405 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:41.405 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:20:41.405 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:41.405 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.405 09:52:28 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:41.406 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.406 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:41.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.406 --rc genhtml_branch_coverage=1 00:20:41.406 --rc genhtml_function_coverage=1 00:20:41.406 --rc genhtml_legend=1 00:20:41.406 --rc geninfo_all_blocks=1 00:20:41.406 --rc geninfo_unexecuted_blocks=1 00:20:41.406 00:20:41.406 ' 00:20:41.406 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:41.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.406 --rc genhtml_branch_coverage=1 00:20:41.406 --rc genhtml_function_coverage=1 00:20:41.406 --rc genhtml_legend=1 00:20:41.406 --rc geninfo_all_blocks=1 00:20:41.406 --rc geninfo_unexecuted_blocks=1 00:20:41.406 00:20:41.406 ' 00:20:41.406 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:41.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.406 --rc genhtml_branch_coverage=1 00:20:41.406 --rc genhtml_function_coverage=1 00:20:41.406 --rc genhtml_legend=1 00:20:41.406 --rc geninfo_all_blocks=1 00:20:41.406 --rc geninfo_unexecuted_blocks=1 00:20:41.406 00:20:41.406 ' 00:20:41.406 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:41.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.406 --rc genhtml_branch_coverage=1 00:20:41.406 --rc genhtml_function_coverage=1 00:20:41.406 --rc genhtml_legend=1 00:20:41.406 --rc geninfo_all_blocks=1 00:20:41.406 --rc geninfo_unexecuted_blocks=1 00:20:41.406 00:20:41.406 ' 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:41.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.kEuaJYPrvF 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77197 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77197 00:20:41.406 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77197 ']' 00:20:41.406 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.406 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.406 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.406 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.406 09:52:28 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:41.406 09:52:28 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.406 [2024-12-05 09:52:29.016409] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
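At this point the harness has launched build/bin/spdk_tgt and is blocked in waitforlisten until the target's RPC socket answers; the "Starting SPDK v25.01-pre" banner above is the target coming up. Outside the autotest helpers, the same readiness check can be approximated by polling any cheap RPC. A minimal sketch, assuming the target is started by hand from the repo root and uses the default /var/tmp/spdk.sock socket (waitforlisten in autotest_common.sh does roughly this, plus stricter PID bookkeeping):

    ./build/bin/spdk_tgt &            # start the target in the background
    tgt_pid=$!
    until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt died" >&2; exit 1; }
        sleep 0.5                     # retry until the RPC server responds
    done
    echo "spdk_tgt ($tgt_pid) ready on /var/tmp/spdk.sock"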
00:20:41.406 [2024-12-05 09:52:29.016744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77197 ] 00:20:41.667 [2024-12-05 09:52:29.181398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.928 [2024-12-05 09:52:29.302357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.501 09:52:29 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.501 09:52:29 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:42.501 09:52:29 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:42.501 09:52:29 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:42.501 09:52:29 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:42.501 09:52:29 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:42.501 09:52:29 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:42.501 09:52:29 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:42.762 09:52:30 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:42.762 09:52:30 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:42.762 09:52:30 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:42.762 09:52:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:42.762 09:52:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:42.762 09:52:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:42.762 09:52:30 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:42.762 09:52:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:43.024 09:52:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:43.024 { 00:20:43.024 "name": "nvme0n1", 00:20:43.024 "aliases": [ 00:20:43.024 "8d20adbb-f907-470b-a2dc-f8d34b1cb909" 00:20:43.024 ], 00:20:43.024 "product_name": "NVMe disk", 00:20:43.024 "block_size": 4096, 00:20:43.024 "num_blocks": 1310720, 00:20:43.024 "uuid": "8d20adbb-f907-470b-a2dc-f8d34b1cb909", 00:20:43.024 "numa_id": -1, 00:20:43.024 "assigned_rate_limits": { 00:20:43.024 "rw_ios_per_sec": 0, 00:20:43.024 "rw_mbytes_per_sec": 0, 00:20:43.024 "r_mbytes_per_sec": 0, 00:20:43.024 "w_mbytes_per_sec": 0 00:20:43.024 }, 00:20:43.024 "claimed": true, 00:20:43.024 "claim_type": "read_many_write_one", 00:20:43.024 "zoned": false, 00:20:43.024 "supported_io_types": { 00:20:43.024 "read": true, 00:20:43.024 "write": true, 00:20:43.024 "unmap": true, 00:20:43.024 "flush": true, 00:20:43.024 "reset": true, 00:20:43.024 "nvme_admin": true, 00:20:43.024 "nvme_io": true, 00:20:43.024 "nvme_io_md": false, 00:20:43.024 "write_zeroes": true, 00:20:43.024 "zcopy": false, 00:20:43.024 "get_zone_info": false, 00:20:43.024 "zone_management": false, 00:20:43.024 "zone_append": false, 00:20:43.024 "compare": true, 00:20:43.024 "compare_and_write": false, 00:20:43.024 "abort": true, 00:20:43.024 "seek_hole": false, 00:20:43.024 "seek_data": false, 00:20:43.024 "copy": true, 00:20:43.024 "nvme_iov_md": false 00:20:43.024 }, 00:20:43.024 "driver_specific": { 00:20:43.024 "nvme": [ 
00:20:43.024 { 00:20:43.024 "pci_address": "0000:00:11.0", 00:20:43.024 "trid": { 00:20:43.024 "trtype": "PCIe", 00:20:43.024 "traddr": "0000:00:11.0" 00:20:43.024 }, 00:20:43.024 "ctrlr_data": { 00:20:43.024 "cntlid": 0, 00:20:43.024 "vendor_id": "0x1b36", 00:20:43.024 "model_number": "QEMU NVMe Ctrl", 00:20:43.024 "serial_number": "12341", 00:20:43.024 "firmware_revision": "8.0.0", 00:20:43.024 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:43.024 "oacs": { 00:20:43.024 "security": 0, 00:20:43.024 "format": 1, 00:20:43.024 "firmware": 0, 00:20:43.024 "ns_manage": 1 00:20:43.024 }, 00:20:43.024 "multi_ctrlr": false, 00:20:43.024 "ana_reporting": false 00:20:43.024 }, 00:20:43.024 "vs": { 00:20:43.024 "nvme_version": "1.4" 00:20:43.024 }, 00:20:43.024 "ns_data": { 00:20:43.024 "id": 1, 00:20:43.024 "can_share": false 00:20:43.024 } 00:20:43.024 } 00:20:43.024 ], 00:20:43.024 "mp_policy": "active_passive" 00:20:43.024 } 00:20:43.024 } 00:20:43.024 ]' 00:20:43.024 09:52:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:43.024 09:52:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:43.024 09:52:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:43.024 09:52:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:43.024 09:52:30 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:43.024 09:52:30 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:43.024 09:52:30 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:43.024 09:52:30 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:43.024 09:52:30 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:43.024 09:52:30 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:43.024 09:52:30 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:43.286 09:52:30 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=6e7e50e2-e152-49c5-b944-60af194b7354 00:20:43.286 09:52:30 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:43.286 09:52:30 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6e7e50e2-e152-49c5-b944-60af194b7354 00:20:43.547 09:52:31 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:43.808 09:52:31 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=8fbf820d-d1ef-4174-8b73-59d0b61e32ee 00:20:43.808 09:52:31 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8fbf820d-d1ef-4174-8b73-59d0b61e32ee 00:20:44.068 09:52:31 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982 00:20:44.068 09:52:31 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:44.068 09:52:31 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982 00:20:44.068 09:52:31 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:44.068 09:52:31 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:44.068 09:52:31 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982 00:20:44.068 09:52:31 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:44.068 09:52:31 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982 00:20:44.068 09:52:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982 00:20:44.068 09:52:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:44.068 09:52:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:44.068 09:52:31 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:44.068 09:52:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982 00:20:44.329 09:52:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:44.329 { 00:20:44.329 "name": "cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982", 00:20:44.329 "aliases": [ 00:20:44.329 "lvs/nvme0n1p0" 00:20:44.329 ], 00:20:44.329 "product_name": "Logical Volume", 00:20:44.329 "block_size": 4096, 00:20:44.329 "num_blocks": 26476544, 00:20:44.329 "uuid": "cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982", 00:20:44.329 "assigned_rate_limits": { 00:20:44.329 "rw_ios_per_sec": 0, 00:20:44.329 "rw_mbytes_per_sec": 0, 00:20:44.329 "r_mbytes_per_sec": 0, 00:20:44.329 "w_mbytes_per_sec": 0 00:20:44.329 }, 00:20:44.329 "claimed": false, 00:20:44.329 "zoned": false, 00:20:44.329 "supported_io_types": { 00:20:44.329 "read": true, 00:20:44.329 "write": true, 00:20:44.329 "unmap": true, 00:20:44.329 "flush": false, 00:20:44.329 "reset": true, 00:20:44.329 "nvme_admin": false, 00:20:44.329 "nvme_io": false, 00:20:44.329 "nvme_io_md": false, 00:20:44.329 "write_zeroes": true, 00:20:44.330 "zcopy": false, 00:20:44.330 "get_zone_info": false, 00:20:44.330 "zone_management": false, 00:20:44.330 "zone_append": false, 00:20:44.330 "compare": false, 00:20:44.330 "compare_and_write": false, 00:20:44.330 "abort": false, 00:20:44.330 "seek_hole": true, 00:20:44.330 "seek_data": true, 00:20:44.330 "copy": false, 00:20:44.330 "nvme_iov_md": false 00:20:44.330 }, 00:20:44.330 "driver_specific": { 00:20:44.330 "lvol": { 00:20:44.330 "lvol_store_uuid": "8fbf820d-d1ef-4174-8b73-59d0b61e32ee", 00:20:44.330 "base_bdev": "nvme0n1", 00:20:44.330 "thin_provision": true, 00:20:44.330 "num_allocated_clusters": 0, 00:20:44.330 "snapshot": false, 00:20:44.330 "clone": false, 00:20:44.330 "esnap_clone": false 00:20:44.330 } 00:20:44.330 } 00:20:44.330 } 00:20:44.330 ]' 00:20:44.330 09:52:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:44.330 09:52:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:44.330 09:52:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:44.330 09:52:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:44.330 09:52:31 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:44.330 09:52:31 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:44.330 09:52:31 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:44.330 09:52:31 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:44.330 09:52:31 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:44.590 09:52:32 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:44.590 09:52:32 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:44.590 09:52:32 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982 00:20:44.590 09:52:32 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982 00:20:44.591 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:44.591 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:44.591 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:44.591 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982 00:20:44.852 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:44.852 { 00:20:44.852 "name": "cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982", 00:20:44.852 "aliases": [ 00:20:44.852 "lvs/nvme0n1p0" 00:20:44.852 ], 00:20:44.852 "product_name": "Logical Volume", 00:20:44.852 "block_size": 4096, 00:20:44.852 "num_blocks": 26476544, 00:20:44.852 "uuid": "cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982", 00:20:44.852 "assigned_rate_limits": { 00:20:44.852 "rw_ios_per_sec": 0, 00:20:44.852 "rw_mbytes_per_sec": 0, 00:20:44.852 "r_mbytes_per_sec": 0, 00:20:44.852 "w_mbytes_per_sec": 0 00:20:44.852 }, 00:20:44.852 "claimed": false, 00:20:44.852 "zoned": false, 00:20:44.852 "supported_io_types": { 00:20:44.852 "read": true, 00:20:44.852 "write": true, 00:20:44.852 "unmap": true, 00:20:44.852 "flush": false, 00:20:44.852 "reset": true, 00:20:44.852 "nvme_admin": false, 00:20:44.852 "nvme_io": false, 00:20:44.852 "nvme_io_md": false, 00:20:44.852 "write_zeroes": true, 00:20:44.852 "zcopy": false, 00:20:44.852 "get_zone_info": false, 00:20:44.852 "zone_management": false, 00:20:44.852 "zone_append": false, 00:20:44.852 "compare": false, 00:20:44.852 "compare_and_write": false, 00:20:44.852 "abort": false, 00:20:44.852 "seek_hole": true, 00:20:44.852 "seek_data": true, 00:20:44.852 "copy": false, 00:20:44.852 "nvme_iov_md": false 00:20:44.852 }, 00:20:44.852 "driver_specific": { 00:20:44.852 "lvol": { 00:20:44.852 "lvol_store_uuid": "8fbf820d-d1ef-4174-8b73-59d0b61e32ee", 00:20:44.852 "base_bdev": "nvme0n1", 00:20:44.852 "thin_provision": true, 00:20:44.852 "num_allocated_clusters": 0, 00:20:44.852 "snapshot": false, 00:20:44.852 "clone": false, 00:20:44.852 "esnap_clone": false 00:20:44.852 } 00:20:44.852 } 00:20:44.852 } 00:20:44.852 ]' 00:20:44.852 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:44.852 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:44.852 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:44.852 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:44.852 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:44.852 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:44.852 09:52:32 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:44.852 09:52:32 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:45.112 09:52:32 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:45.112 09:52:32 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982 00:20:45.113 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982 00:20:45.113 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:45.113 09:52:32 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # local bs 00:20:45.113 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:45.113 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982 00:20:45.373 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:45.373 { 00:20:45.373 "name": "cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982", 00:20:45.373 "aliases": [ 00:20:45.373 "lvs/nvme0n1p0" 00:20:45.373 ], 00:20:45.373 "product_name": "Logical Volume", 00:20:45.373 "block_size": 4096, 00:20:45.373 "num_blocks": 26476544, 00:20:45.373 "uuid": "cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982", 00:20:45.373 "assigned_rate_limits": { 00:20:45.373 "rw_ios_per_sec": 0, 00:20:45.373 "rw_mbytes_per_sec": 0, 00:20:45.373 "r_mbytes_per_sec": 0, 00:20:45.373 "w_mbytes_per_sec": 0 00:20:45.373 }, 00:20:45.373 "claimed": false, 00:20:45.373 "zoned": false, 00:20:45.373 "supported_io_types": { 00:20:45.373 "read": true, 00:20:45.373 "write": true, 00:20:45.373 "unmap": true, 00:20:45.373 "flush": false, 00:20:45.373 "reset": true, 00:20:45.373 "nvme_admin": false, 00:20:45.373 "nvme_io": false, 00:20:45.373 "nvme_io_md": false, 00:20:45.373 "write_zeroes": true, 00:20:45.373 "zcopy": false, 00:20:45.373 "get_zone_info": false, 00:20:45.373 "zone_management": false, 00:20:45.373 "zone_append": false, 00:20:45.373 "compare": false, 00:20:45.373 "compare_and_write": false, 00:20:45.373 "abort": false, 00:20:45.373 "seek_hole": true, 00:20:45.373 "seek_data": true, 00:20:45.373 "copy": false, 00:20:45.373 "nvme_iov_md": false 00:20:45.373 }, 00:20:45.373 "driver_specific": { 00:20:45.373 "lvol": { 00:20:45.373 "lvol_store_uuid": "8fbf820d-d1ef-4174-8b73-59d0b61e32ee", 00:20:45.373 "base_bdev": "nvme0n1", 00:20:45.373 "thin_provision": true, 00:20:45.373 "num_allocated_clusters": 0, 00:20:45.373 "snapshot": false, 00:20:45.373 "clone": false, 00:20:45.373 "esnap_clone": false 00:20:45.373 } 00:20:45.373 } 00:20:45.373 } 00:20:45.373 ]' 00:20:45.373 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:45.373 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:45.373 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:45.373 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:45.373 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:45.373 09:52:32 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:45.373 09:52:32 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:45.373 09:52:32 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982 --l2p_dram_limit 10' 00:20:45.373 09:52:32 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:45.373 09:52:32 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:45.373 09:52:32 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:45.373 09:52:32 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:45.373 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:45.373 09:52:32 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cd3c5ecc-05d4-4bf6-adbf-0fab7b2eb982 --l2p_dram_limit 10 -c nvc0n1p0 00:20:45.636 
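The bdev_ftl_create call above (note the long -t 240 RPC timeout) is the final step of the bdev stack the restore test has been assembling since the target came up. Reconstructed from the xtrace records, the sequence is roughly the following; the 103424 MiB lvol size matches the 26476544 x 4 KiB blocks reported by bdev_get_bdevs, the 5171 MiB cache slice is what this particular run computed, and the UUID placeholders stand in for values that differ on every run:

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base NVMe
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # NV-cache NVMe
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                          # lvstore on the base drive
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>            # thin-provisioned 101 GiB lvol
    rpc.py bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB cache partition
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0

The "[: : integer expression expected" message just above is restore.sh evaluating '[' '' -eq 1 ']' with an empty operand; as the FTL startup log that follows shows, the script continues past it.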
[2024-12-05 09:52:33.022884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.636 [2024-12-05 09:52:33.023017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:45.636 [2024-12-05 09:52:33.023036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:45.636 [2024-12-05 09:52:33.023044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.636 [2024-12-05 09:52:33.023095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.636 [2024-12-05 09:52:33.023103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:45.636 [2024-12-05 09:52:33.023111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:45.636 [2024-12-05 09:52:33.023117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.636 [2024-12-05 09:52:33.023137] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:45.636 [2024-12-05 09:52:33.023734] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:45.636 [2024-12-05 09:52:33.023752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.636 [2024-12-05 09:52:33.023758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:45.636 [2024-12-05 09:52:33.023766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:20:45.636 [2024-12-05 09:52:33.023772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.636 [2024-12-05 09:52:33.023823] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b77a6074-e442-45aa-b19f-66a48a40d8dc 00:20:45.636 [2024-12-05 09:52:33.024794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.636 [2024-12-05 09:52:33.024819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:45.636 [2024-12-05 09:52:33.024828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:45.636 [2024-12-05 09:52:33.024836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.636 [2024-12-05 09:52:33.029562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.636 [2024-12-05 09:52:33.029675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:45.636 [2024-12-05 09:52:33.029687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.692 ms 00:20:45.636 [2024-12-05 09:52:33.029695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.636 [2024-12-05 09:52:33.029764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.636 [2024-12-05 09:52:33.029773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:45.636 [2024-12-05 09:52:33.029780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:45.636 [2024-12-05 09:52:33.029790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.636 [2024-12-05 09:52:33.029827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.636 [2024-12-05 09:52:33.029836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:45.636 [2024-12-05 09:52:33.029844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:45.636 [2024-12-05 09:52:33.029852] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.636 [2024-12-05 09:52:33.029867] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:45.636 [2024-12-05 09:52:33.032749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.636 [2024-12-05 09:52:33.032844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:45.636 [2024-12-05 09:52:33.032860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.885 ms 00:20:45.636 [2024-12-05 09:52:33.032867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.636 [2024-12-05 09:52:33.032895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.636 [2024-12-05 09:52:33.032901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:45.637 [2024-12-05 09:52:33.032909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:45.637 [2024-12-05 09:52:33.032915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.637 [2024-12-05 09:52:33.032935] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:45.637 [2024-12-05 09:52:33.033044] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:45.637 [2024-12-05 09:52:33.033056] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:45.637 [2024-12-05 09:52:33.033065] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:45.637 [2024-12-05 09:52:33.033074] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:45.637 [2024-12-05 09:52:33.033081] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:45.637 [2024-12-05 09:52:33.033088] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:45.637 [2024-12-05 09:52:33.033095] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:45.637 [2024-12-05 09:52:33.033104] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:45.637 [2024-12-05 09:52:33.033110] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:45.637 [2024-12-05 09:52:33.033117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.637 [2024-12-05 09:52:33.033128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:45.637 [2024-12-05 09:52:33.033136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:20:45.637 [2024-12-05 09:52:33.033142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.637 [2024-12-05 09:52:33.033208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.637 [2024-12-05 09:52:33.033215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:45.637 [2024-12-05 09:52:33.033222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:45.637 [2024-12-05 09:52:33.033228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.637 [2024-12-05 09:52:33.033306] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:45.637 [2024-12-05 09:52:33.033314] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:20:45.637 [2024-12-05 09:52:33.033321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:45.637 [2024-12-05 09:52:33.033327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.637 [2024-12-05 09:52:33.033335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:45.637 [2024-12-05 09:52:33.033340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:45.637 [2024-12-05 09:52:33.033347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:45.637 [2024-12-05 09:52:33.033353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:45.637 [2024-12-05 09:52:33.033361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:45.637 [2024-12-05 09:52:33.033366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:45.637 [2024-12-05 09:52:33.033374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:45.637 [2024-12-05 09:52:33.033379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:45.637 [2024-12-05 09:52:33.033387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:45.637 [2024-12-05 09:52:33.033392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:45.637 [2024-12-05 09:52:33.033399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:45.637 [2024-12-05 09:52:33.033405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.637 [2024-12-05 09:52:33.033413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:45.637 [2024-12-05 09:52:33.033419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:45.637 [2024-12-05 09:52:33.033426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.637 [2024-12-05 09:52:33.033431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:45.637 [2024-12-05 09:52:33.033437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:45.637 [2024-12-05 09:52:33.033443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.637 [2024-12-05 09:52:33.033450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:45.637 [2024-12-05 09:52:33.033455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:45.637 [2024-12-05 09:52:33.033461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.637 [2024-12-05 09:52:33.033465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:45.637 [2024-12-05 09:52:33.033472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:45.637 [2024-12-05 09:52:33.033477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.637 [2024-12-05 09:52:33.033484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:45.637 [2024-12-05 09:52:33.033489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:45.637 [2024-12-05 09:52:33.033495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.637 [2024-12-05 09:52:33.033500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:45.637 [2024-12-05 09:52:33.033517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:45.637 [2024-12-05 09:52:33.033523] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:45.637 [2024-12-05 09:52:33.033530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:45.637 [2024-12-05 09:52:33.033535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:45.637 [2024-12-05 09:52:33.033543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:45.637 [2024-12-05 09:52:33.033548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:45.637 [2024-12-05 09:52:33.033555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:45.637 [2024-12-05 09:52:33.033560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.637 [2024-12-05 09:52:33.033567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:45.637 [2024-12-05 09:52:33.033573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:45.637 [2024-12-05 09:52:33.033580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.637 [2024-12-05 09:52:33.033585] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:45.637 [2024-12-05 09:52:33.033592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:45.637 [2024-12-05 09:52:33.033597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:45.637 [2024-12-05 09:52:33.033605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.637 [2024-12-05 09:52:33.033612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:45.637 [2024-12-05 09:52:33.033620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:45.637 [2024-12-05 09:52:33.033626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:45.637 [2024-12-05 09:52:33.033633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:45.637 [2024-12-05 09:52:33.033638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:45.637 [2024-12-05 09:52:33.033644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:45.637 [2024-12-05 09:52:33.033651] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:45.637 [2024-12-05 09:52:33.033661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:45.637 [2024-12-05 09:52:33.033668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:45.637 [2024-12-05 09:52:33.033675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:45.637 [2024-12-05 09:52:33.033680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:45.637 [2024-12-05 09:52:33.033686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:45.637 [2024-12-05 09:52:33.033693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:45.637 [2024-12-05 09:52:33.033700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:20:45.637 [2024-12-05 09:52:33.033705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:45.637 [2024-12-05 09:52:33.033713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:45.637 [2024-12-05 09:52:33.033718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:45.637 [2024-12-05 09:52:33.033726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:45.637 [2024-12-05 09:52:33.033732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:45.637 [2024-12-05 09:52:33.033741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:45.637 [2024-12-05 09:52:33.033746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:45.637 [2024-12-05 09:52:33.033754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:45.637 [2024-12-05 09:52:33.033759] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:45.637 [2024-12-05 09:52:33.033767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:45.637 [2024-12-05 09:52:33.033773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:45.637 [2024-12-05 09:52:33.033780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:45.637 [2024-12-05 09:52:33.033785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:45.637 [2024-12-05 09:52:33.033793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:45.637 [2024-12-05 09:52:33.033799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.637 [2024-12-05 09:52:33.033806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:45.637 [2024-12-05 09:52:33.033811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:20:45.638 [2024-12-05 09:52:33.033818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.638 [2024-12-05 09:52:33.033857] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
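The "SB metadata layout" rows above become easier to read once you notice that blk_offs and blk_sz are hexadecimal counts of 4096-byte FTL blocks (the block size the base device reports), so each row can be cross-checked against the MiB figures in the region dump. For example, the type 0x2 row lines up with the l2p region, and the 20971520 L2P entries at 4 bytes each account for exactly that space:

  printf '%d MiB\n' $(( 0x5000 * 4096 / 1024 / 1024 ))   # -> 80 MiB, matching "Region l2p ... blocks: 80.00 MiB"
  printf '%d MiB\n' $(( 20971520 * 4 / 1024 / 1024 ))    # L2P entries x 4-byte addresses -> the same 80 MiB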
00:20:45.638 [2024-12-05 09:52:33.033869] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:48.938 [2024-12-05 09:52:36.496868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.938 [2024-12-05 09:52:36.497207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:48.938 [2024-12-05 09:52:36.497302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3462.992 ms 00:20:48.938 [2024-12-05 09:52:36.497333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.938 [2024-12-05 09:52:36.529743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.938 [2024-12-05 09:52:36.530000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:48.938 [2024-12-05 09:52:36.530192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.103 ms 00:20:48.938 [2024-12-05 09:52:36.530226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.938 [2024-12-05 09:52:36.530392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.938 [2024-12-05 09:52:36.530696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:48.938 [2024-12-05 09:52:36.530728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:48.938 [2024-12-05 09:52:36.530759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.199 [2024-12-05 09:52:36.566784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.199 [2024-12-05 09:52:36.567005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:49.199 [2024-12-05 09:52:36.567122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.954 ms 00:20:49.199 [2024-12-05 09:52:36.567154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.199 [2024-12-05 09:52:36.567206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.199 [2024-12-05 09:52:36.567241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:49.199 [2024-12-05 09:52:36.567263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:49.199 [2024-12-05 09:52:36.567293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.199 [2024-12-05 09:52:36.567887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.199 [2024-12-05 09:52:36.568090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:49.199 [2024-12-05 09:52:36.568157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:20:49.199 [2024-12-05 09:52:36.568185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.199 [2024-12-05 09:52:36.568318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.199 [2024-12-05 09:52:36.568346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:49.199 [2024-12-05 09:52:36.568370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:49.199 [2024-12-05 09:52:36.568394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.199 [2024-12-05 09:52:36.586178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.199 [2024-12-05 09:52:36.586373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:49.199 [2024-12-05 
09:52:36.586505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.698 ms 00:20:49.199 [2024-12-05 09:52:36.586569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.199 [2024-12-05 09:52:36.619653] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:49.199 [2024-12-05 09:52:36.623728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.199 [2024-12-05 09:52:36.623903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:49.199 [2024-12-05 09:52:36.624000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.030 ms 00:20:49.199 [2024-12-05 09:52:36.624027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.199 [2024-12-05 09:52:36.717370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.199 [2024-12-05 09:52:36.717618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:49.199 [2024-12-05 09:52:36.717702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.274 ms 00:20:49.199 [2024-12-05 09:52:36.717729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.199 [2024-12-05 09:52:36.717946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.199 [2024-12-05 09:52:36.717982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:49.199 [2024-12-05 09:52:36.718062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:20:49.199 [2024-12-05 09:52:36.718086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.199 [2024-12-05 09:52:36.744850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.199 [2024-12-05 09:52:36.745038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:49.199 [2024-12-05 09:52:36.745104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.683 ms 00:20:49.199 [2024-12-05 09:52:36.745128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.199 [2024-12-05 09:52:36.771063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.199 [2024-12-05 09:52:36.771250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:49.199 [2024-12-05 09:52:36.771330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.868 ms 00:20:49.199 [2024-12-05 09:52:36.771351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.199 [2024-12-05 09:52:36.772026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.199 [2024-12-05 09:52:36.772148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:49.199 [2024-12-05 09:52:36.772226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:20:49.199 [2024-12-05 09:52:36.772256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.460 [2024-12-05 09:52:36.857310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.460 [2024-12-05 09:52:36.857503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:49.460 [2024-12-05 09:52:36.857611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.971 ms 00:20:49.460 [2024-12-05 09:52:36.857638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.460 [2024-12-05 
09:52:36.886254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.460 [2024-12-05 09:52:36.886449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:49.460 [2024-12-05 09:52:36.886477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.410 ms 00:20:49.460 [2024-12-05 09:52:36.886486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.460 [2024-12-05 09:52:36.913753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.460 [2024-12-05 09:52:36.913944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:49.460 [2024-12-05 09:52:36.913971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.194 ms 00:20:49.460 [2024-12-05 09:52:36.913980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.460 [2024-12-05 09:52:36.941529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.460 [2024-12-05 09:52:36.941581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:49.460 [2024-12-05 09:52:36.941598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.500 ms 00:20:49.460 [2024-12-05 09:52:36.941607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.460 [2024-12-05 09:52:36.941670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.460 [2024-12-05 09:52:36.941681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:49.460 [2024-12-05 09:52:36.941696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:49.460 [2024-12-05 09:52:36.941704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.460 [2024-12-05 09:52:36.941816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.460 [2024-12-05 09:52:36.941838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:49.460 [2024-12-05 09:52:36.941850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:49.460 [2024-12-05 09:52:36.941858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.460 [2024-12-05 09:52:36.943059] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3919.634 ms, result 0 00:20:49.460 { 00:20:49.460 "name": "ftl0", 00:20:49.460 "uuid": "b77a6074-e442-45aa-b19f-66a48a40d8dc" 00:20:49.460 } 00:20:49.460 09:52:36 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:49.460 09:52:36 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:49.722 09:52:37 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:49.722 09:52:37 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:49.986 [2024-12-05 09:52:37.390377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.986 [2024-12-05 09:52:37.390449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:49.986 [2024-12-05 09:52:37.390465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:49.986 [2024-12-05 09:52:37.390476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.986 [2024-12-05 09:52:37.390502] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
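Two observations before the teardown. Startup time above is dominated by the scrub: 3462.992 ms of the 3919.634 ms total (roughly 88%) went to "Scrub NV cache". And restore.sh@61-63 wrap the live bdev configuration in a JSON envelope so a later run can recreate the same bdevs; a sketch of that assembly, with the output path assumed from the --json flag used further below (variable names hypothetical):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ftl_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  {
      echo '{"subsystems": ['
      "$rpc_py" save_subsystem_config -n bdev
      echo ']}'
  } > "$ftl_json"

With the config saved, restore.sh@65 calls bdev_ftl_unload -b ftl0, which is what drives the "FTL shutdown" sequence recorded in this trace.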
00:20:49.986 [2024-12-05 09:52:37.393699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.986 [2024-12-05 09:52:37.393745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:49.986 [2024-12-05 09:52:37.393760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.144 ms 00:20:49.986 [2024-12-05 09:52:37.393768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.986 [2024-12-05 09:52:37.394059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.986 [2024-12-05 09:52:37.394077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:49.986 [2024-12-05 09:52:37.394090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:20:49.986 [2024-12-05 09:52:37.394100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.986 [2024-12-05 09:52:37.397362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.986 [2024-12-05 09:52:37.397577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:49.986 [2024-12-05 09:52:37.397601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:20:49.986 [2024-12-05 09:52:37.397610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.986 [2024-12-05 09:52:37.403965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.986 [2024-12-05 09:52:37.404012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:49.986 [2024-12-05 09:52:37.404029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.301 ms 00:20:49.986 [2024-12-05 09:52:37.404038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.986 [2024-12-05 09:52:37.431384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.986 [2024-12-05 09:52:37.431438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:49.986 [2024-12-05 09:52:37.431454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.248 ms 00:20:49.986 [2024-12-05 09:52:37.431463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.986 [2024-12-05 09:52:37.450363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.986 [2024-12-05 09:52:37.450415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:49.986 [2024-12-05 09:52:37.450431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.812 ms 00:20:49.986 [2024-12-05 09:52:37.450440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.986 [2024-12-05 09:52:37.450657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.986 [2024-12-05 09:52:37.450673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:49.986 [2024-12-05 09:52:37.450686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:20:49.986 [2024-12-05 09:52:37.450694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.986 [2024-12-05 09:52:37.477443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.986 [2024-12-05 09:52:37.477492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:49.986 [2024-12-05 09:52:37.477526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.719 ms 00:20:49.986 [2024-12-05 09:52:37.477535] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.986 [2024-12-05 09:52:37.503904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.986 [2024-12-05 09:52:37.503964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:49.986 [2024-12-05 09:52:37.503980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.308 ms 00:20:49.986 [2024-12-05 09:52:37.503988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.986 [2024-12-05 09:52:37.530195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.986 [2024-12-05 09:52:37.530244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:49.986 [2024-12-05 09:52:37.530258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.142 ms 00:20:49.986 [2024-12-05 09:52:37.530266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.986 [2024-12-05 09:52:37.556882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.986 [2024-12-05 09:52:37.556931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:49.986 [2024-12-05 09:52:37.556946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.495 ms 00:20:49.986 [2024-12-05 09:52:37.556954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.986 [2024-12-05 09:52:37.557063] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:49.986 [2024-12-05 09:52:37.557079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 
09:52:37.557225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:49.986 [2024-12-05 09:52:37.557298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:20:49.987 [2024-12-05 09:52:37.557458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.557998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:49.987 [2024-12-05 09:52:37.558011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:49.988 [2024-12-05 09:52:37.558023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:49.988 [2024-12-05 09:52:37.558035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:49.988 [2024-12-05 09:52:37.558044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:49.988 [2024-12-05 09:52:37.558056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:49.988 [2024-12-05 09:52:37.558065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:49.988 [2024-12-05 09:52:37.558077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:49.988 [2024-12-05 09:52:37.558085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:49.988 [2024-12-05 09:52:37.558096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:49.988 [2024-12-05 09:52:37.558113] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:49.988 [2024-12-05 09:52:37.558123] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b77a6074-e442-45aa-b19f-66a48a40d8dc 00:20:49.988 [2024-12-05 09:52:37.558132] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:49.988 [2024-12-05 09:52:37.558143] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:49.988 [2024-12-05 09:52:37.558154] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:49.988 [2024-12-05 09:52:37.558164] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:49.988 [2024-12-05 09:52:37.558171] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:49.988 [2024-12-05 09:52:37.558183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:49.988 [2024-12-05 09:52:37.558190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:49.988 [2024-12-05 09:52:37.558199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:49.988 [2024-12-05 09:52:37.558206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:49.988 [2024-12-05 09:52:37.558217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.988 [2024-12-05 09:52:37.558226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:49.988 [2024-12-05 09:52:37.558237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 00:20:49.988 [2024-12-05 09:52:37.558247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.988 [2024-12-05 09:52:37.572604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.988 [2024-12-05 09:52:37.572651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
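The statistics dump above also decodes the "WAF: inf" line: write amplification here appears to be the ratio of total device writes to user writes, and with total writes 960 against user writes 0 the ratio is undefined, so the driver prints inf. As a worked line:

  WAF = total writes / user writes = 960 / 0 -> division by zero, reported as "inf"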
00:20:49.988 [2024-12-05 09:52:37.572667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.305 ms 00:20:49.988 [2024-12-05 09:52:37.572676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.988 [2024-12-05 09:52:37.573085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.988 [2024-12-05 09:52:37.573108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:49.988 [2024-12-05 09:52:37.573122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:20:49.988 [2024-12-05 09:52:37.573130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.250 [2024-12-05 09:52:37.619874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.250 [2024-12-05 09:52:37.620114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:50.250 [2024-12-05 09:52:37.620143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.250 [2024-12-05 09:52:37.620153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.250 [2024-12-05 09:52:37.620231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.250 [2024-12-05 09:52:37.620242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:50.250 [2024-12-05 09:52:37.620256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.250 [2024-12-05 09:52:37.620265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.250 [2024-12-05 09:52:37.620368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.250 [2024-12-05 09:52:37.620381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:50.250 [2024-12-05 09:52:37.620392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.250 [2024-12-05 09:52:37.620399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.250 [2024-12-05 09:52:37.620423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.250 [2024-12-05 09:52:37.620432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:50.250 [2024-12-05 09:52:37.620442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.250 [2024-12-05 09:52:37.620452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.250 [2024-12-05 09:52:37.705541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.250 [2024-12-05 09:52:37.705597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:50.250 [2024-12-05 09:52:37.705613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.250 [2024-12-05 09:52:37.705621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.250 [2024-12-05 09:52:37.774966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.250 [2024-12-05 09:52:37.775211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:50.250 [2024-12-05 09:52:37.775237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.250 [2024-12-05 09:52:37.775249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.250 [2024-12-05 09:52:37.775337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.250 [2024-12-05 09:52:37.775349] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:50.250 [2024-12-05 09:52:37.775360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.250 [2024-12-05 09:52:37.775368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.250 [2024-12-05 09:52:37.775440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.250 [2024-12-05 09:52:37.775453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:50.250 [2024-12-05 09:52:37.775464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.250 [2024-12-05 09:52:37.775472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.250 [2024-12-05 09:52:37.775622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.250 [2024-12-05 09:52:37.775637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:50.250 [2024-12-05 09:52:37.775648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.250 [2024-12-05 09:52:37.775656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.250 [2024-12-05 09:52:37.775695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.250 [2024-12-05 09:52:37.775704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:50.250 [2024-12-05 09:52:37.775716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.250 [2024-12-05 09:52:37.775724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.250 [2024-12-05 09:52:37.775772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.250 [2024-12-05 09:52:37.775783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:50.250 [2024-12-05 09:52:37.775794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.250 [2024-12-05 09:52:37.775802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.250 [2024-12-05 09:52:37.775855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.250 [2024-12-05 09:52:37.775867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:50.250 [2024-12-05 09:52:37.775878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.250 [2024-12-05 09:52:37.775886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.250 [2024-12-05 09:52:37.776051] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 385.639 ms, result 0 00:20:50.250 true 00:20:50.250 09:52:37 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77197 00:20:50.250 09:52:37 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77197 ']' 00:20:50.250 09:52:37 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77197 00:20:50.250 09:52:37 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:50.250 09:52:37 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.250 09:52:37 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77197 00:20:50.250 killing process with pid 77197 00:20:50.250 09:52:37 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.250 09:52:37 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:20:50.250 09:52:37 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77197' 00:20:50.250 09:52:37 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77197 00:20:50.250 09:52:37 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77197 00:20:56.836 09:52:44 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:01.038 262144+0 records in 00:21:01.038 262144+0 records out 00:21:01.038 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.58474 s, 300 MB/s 00:21:01.038 09:52:47 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:02.426 09:52:49 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:02.426 [2024-12-05 09:52:49.702233] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:21:02.426 [2024-12-05 09:52:49.702432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77429 ] 00:21:02.426 [2024-12-05 09:52:49.857835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.426 [2024-12-05 09:52:49.973043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.688 [2024-12-05 09:52:50.270847] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:02.688 [2024-12-05 09:52:50.270938] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:02.950 [2024-12-05 09:52:50.432951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.950 [2024-12-05 09:52:50.433020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:02.950 [2024-12-05 09:52:50.433036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:02.950 [2024-12-05 09:52:50.433044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.950 [2024-12-05 09:52:50.433097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.950 [2024-12-05 09:52:50.433109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:02.950 [2024-12-05 09:52:50.433118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:02.950 [2024-12-05 09:52:50.433127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.950 [2024-12-05 09:52:50.433147] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:02.950 [2024-12-05 09:52:50.434063] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:02.950 [2024-12-05 09:52:50.434108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.950 [2024-12-05 09:52:50.434117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:02.950 [2024-12-05 09:52:50.434128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:21:02.950 [2024-12-05 09:52:50.434136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.950 [2024-12-05 09:52:50.435819] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
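A quick sanity check on the dd step above: 262144 records of 4 KiB are exactly 1073741824 bytes, and the reported rate matches the byte count over the elapsed time:

  echo $(( 262144 * 4096 ))                                          # 1073741824 bytes, i.e. 1.0 GiB
  awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 3.58474 / 1e6 }'   # -> 300 MB/s, as dd reports

The md5sum at restore.sh@70 presumably records the testfile's checksum before it is written to ftl0, giving the restore path a reference to verify against later. Note also that the bring-up in the trace that follows reports "FTL layout setup mode 0" after "Load super block", where the initial create above ran in mode 1 with a default-initialized superblock, consistent with the device being reopened rather than recreated.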
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:02.950 [2024-12-05 09:52:50.450446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.950 [2024-12-05 09:52:50.450541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:02.950 [2024-12-05 09:52:50.450558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.627 ms 00:21:02.950 [2024-12-05 09:52:50.450569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.950 [2024-12-05 09:52:50.450672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.950 [2024-12-05 09:52:50.450688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:02.950 [2024-12-05 09:52:50.450699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:02.950 [2024-12-05 09:52:50.450707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.950 [2024-12-05 09:52:50.460127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.950 [2024-12-05 09:52:50.460173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:02.950 [2024-12-05 09:52:50.460185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.334 ms 00:21:02.950 [2024-12-05 09:52:50.460200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.950 [2024-12-05 09:52:50.460284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.950 [2024-12-05 09:52:50.460293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:02.950 [2024-12-05 09:52:50.460305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:02.950 [2024-12-05 09:52:50.460314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.950 [2024-12-05 09:52:50.460363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.950 [2024-12-05 09:52:50.460374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:02.950 [2024-12-05 09:52:50.460383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:02.950 [2024-12-05 09:52:50.460391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.950 [2024-12-05 09:52:50.460419] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:02.950 [2024-12-05 09:52:50.464675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.950 [2024-12-05 09:52:50.464720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:02.950 [2024-12-05 09:52:50.464734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.263 ms 00:21:02.950 [2024-12-05 09:52:50.464743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.950 [2024-12-05 09:52:50.464783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.950 [2024-12-05 09:52:50.464792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:02.950 [2024-12-05 09:52:50.464802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:02.950 [2024-12-05 09:52:50.464810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.950 [2024-12-05 09:52:50.464864] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:02.950 [2024-12-05 09:52:50.464890] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:02.950 [2024-12-05 09:52:50.464929] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:02.950 [2024-12-05 09:52:50.464950] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:02.950 [2024-12-05 09:52:50.465057] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:02.950 [2024-12-05 09:52:50.465071] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:02.950 [2024-12-05 09:52:50.465082] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:02.950 [2024-12-05 09:52:50.465094] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:02.950 [2024-12-05 09:52:50.465103] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:02.950 [2024-12-05 09:52:50.465113] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:02.950 [2024-12-05 09:52:50.465121] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:02.950 [2024-12-05 09:52:50.465132] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:02.950 [2024-12-05 09:52:50.465141] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:02.951 [2024-12-05 09:52:50.465151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.951 [2024-12-05 09:52:50.465159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:02.951 [2024-12-05 09:52:50.465168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:21:02.951 [2024-12-05 09:52:50.465178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.951 [2024-12-05 09:52:50.465264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.951 [2024-12-05 09:52:50.465275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:02.951 [2024-12-05 09:52:50.465284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:02.951 [2024-12-05 09:52:50.465291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.951 [2024-12-05 09:52:50.465399] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:02.951 [2024-12-05 09:52:50.465410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:02.951 [2024-12-05 09:52:50.465419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:02.951 [2024-12-05 09:52:50.465427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.951 [2024-12-05 09:52:50.465436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:02.951 [2024-12-05 09:52:50.465445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:02.951 [2024-12-05 09:52:50.465453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:02.951 [2024-12-05 09:52:50.465461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:02.951 [2024-12-05 09:52:50.465469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:02.951 [2024-12-05 
09:52:50.465476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.951 [2024-12-05 09:52:50.465485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:02.951 [2024-12-05 09:52:50.465493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:02.951 [2024-12-05 09:52:50.465500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.951 [2024-12-05 09:52:50.465533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:02.951 [2024-12-05 09:52:50.465541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:02.951 [2024-12-05 09:52:50.465548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.951 [2024-12-05 09:52:50.465559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:02.951 [2024-12-05 09:52:50.465568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:02.951 [2024-12-05 09:52:50.465576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.951 [2024-12-05 09:52:50.465583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:02.951 [2024-12-05 09:52:50.465590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:02.951 [2024-12-05 09:52:50.465597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.951 [2024-12-05 09:52:50.465606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:02.951 [2024-12-05 09:52:50.465615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:02.951 [2024-12-05 09:52:50.465622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.951 [2024-12-05 09:52:50.465629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:02.951 [2024-12-05 09:52:50.465636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:02.951 [2024-12-05 09:52:50.465643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.951 [2024-12-05 09:52:50.465650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:02.951 [2024-12-05 09:52:50.465659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:02.951 [2024-12-05 09:52:50.465665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.951 [2024-12-05 09:52:50.465672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:02.951 [2024-12-05 09:52:50.465680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:02.951 [2024-12-05 09:52:50.465688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.951 [2024-12-05 09:52:50.465695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:02.951 [2024-12-05 09:52:50.465704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:02.951 [2024-12-05 09:52:50.465711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.951 [2024-12-05 09:52:50.465718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:02.951 [2024-12-05 09:52:50.465725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:02.951 [2024-12-05 09:52:50.465731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.951 [2024-12-05 09:52:50.465738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:21:02.951 [2024-12-05 09:52:50.465746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:02.951 [2024-12-05 09:52:50.465755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.951 [2024-12-05 09:52:50.465762] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:02.951 [2024-12-05 09:52:50.465770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:02.951 [2024-12-05 09:52:50.465777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:02.951 [2024-12-05 09:52:50.465784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.951 [2024-12-05 09:52:50.465793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:02.951 [2024-12-05 09:52:50.465801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:02.951 [2024-12-05 09:52:50.465808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:02.951 [2024-12-05 09:52:50.465815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:02.951 [2024-12-05 09:52:50.465821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:02.951 [2024-12-05 09:52:50.465829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:02.951 [2024-12-05 09:52:50.465846] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:02.951 [2024-12-05 09:52:50.465856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.951 [2024-12-05 09:52:50.465868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:02.951 [2024-12-05 09:52:50.465876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:02.951 [2024-12-05 09:52:50.465884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:02.951 [2024-12-05 09:52:50.465892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:02.951 [2024-12-05 09:52:50.465900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:02.951 [2024-12-05 09:52:50.465907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:02.951 [2024-12-05 09:52:50.465915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:02.951 [2024-12-05 09:52:50.465922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:02.951 [2024-12-05 09:52:50.465930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:02.951 [2024-12-05 09:52:50.465939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:02.951 [2024-12-05 09:52:50.465947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:02.951 [2024-12-05 09:52:50.465954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:02.951 [2024-12-05 09:52:50.465962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:02.951 [2024-12-05 09:52:50.465970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:02.951 [2024-12-05 09:52:50.465978] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:02.951 [2024-12-05 09:52:50.465987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.951 [2024-12-05 09:52:50.465996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:02.951 [2024-12-05 09:52:50.466004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:02.951 [2024-12-05 09:52:50.466011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:02.951 [2024-12-05 09:52:50.466021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:02.951 [2024-12-05 09:52:50.466029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.951 [2024-12-05 09:52:50.466037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:02.951 [2024-12-05 09:52:50.466045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:21:02.951 [2024-12-05 09:52:50.466053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.951 [2024-12-05 09:52:50.498758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.951 [2024-12-05 09:52:50.498813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:02.951 [2024-12-05 09:52:50.498827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.657 ms 00:21:02.951 [2024-12-05 09:52:50.498839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.951 [2024-12-05 09:52:50.498932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.951 [2024-12-05 09:52:50.498941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:02.951 [2024-12-05 09:52:50.498951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:02.951 [2024-12-05 09:52:50.498959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.951 [2024-12-05 09:52:50.547501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.951 [2024-12-05 09:52:50.547577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:02.951 [2024-12-05 09:52:50.547591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.478 ms 00:21:02.951 [2024-12-05 09:52:50.547600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.951 [2024-12-05 09:52:50.547655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.952 [2024-12-05 
09:52:50.547666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.952 [2024-12-05 09:52:50.547679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:02.952 [2024-12-05 09:52:50.547687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.952 [2024-12-05 09:52:50.548355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.952 [2024-12-05 09:52:50.548395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.952 [2024-12-05 09:52:50.548409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:21:02.952 [2024-12-05 09:52:50.548418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.952 [2024-12-05 09:52:50.548600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.952 [2024-12-05 09:52:50.548614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.952 [2024-12-05 09:52:50.548630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:21:02.952 [2024-12-05 09:52:50.548638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.952 [2024-12-05 09:52:50.564508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.952 [2024-12-05 09:52:50.564575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.952 [2024-12-05 09:52:50.564586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.848 ms 00:21:02.952 [2024-12-05 09:52:50.564595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.213 [2024-12-05 09:52:50.579250] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:03.213 [2024-12-05 09:52:50.579312] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:03.213 [2024-12-05 09:52:50.579329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.213 [2024-12-05 09:52:50.579339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:03.213 [2024-12-05 09:52:50.579351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.620 ms 00:21:03.213 [2024-12-05 09:52:50.579360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.213 [2024-12-05 09:52:50.605244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.213 [2024-12-05 09:52:50.605306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:03.213 [2024-12-05 09:52:50.605319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.825 ms 00:21:03.213 [2024-12-05 09:52:50.605328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.213 [2024-12-05 09:52:50.618409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.213 [2024-12-05 09:52:50.618461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:03.213 [2024-12-05 09:52:50.618473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.023 ms 00:21:03.214 [2024-12-05 09:52:50.618481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-12-05 09:52:50.631020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.214 [2024-12-05 09:52:50.631072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:21:03.214 [2024-12-05 09:52:50.631085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.475 ms 00:21:03.214 [2024-12-05 09:52:50.631093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-12-05 09:52:50.631764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.214 [2024-12-05 09:52:50.631801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:03.214 [2024-12-05 09:52:50.631813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:21:03.214 [2024-12-05 09:52:50.631825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-12-05 09:52:50.698054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.214 [2024-12-05 09:52:50.698126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:03.214 [2024-12-05 09:52:50.698145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.207 ms 00:21:03.214 [2024-12-05 09:52:50.698162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-12-05 09:52:50.709121] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:03.214 [2024-12-05 09:52:50.712192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.214 [2024-12-05 09:52:50.712237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:03.214 [2024-12-05 09:52:50.712251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.968 ms 00:21:03.214 [2024-12-05 09:52:50.712259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-12-05 09:52:50.712348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.214 [2024-12-05 09:52:50.712360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:03.214 [2024-12-05 09:52:50.712371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:03.214 [2024-12-05 09:52:50.712380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-12-05 09:52:50.712459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.214 [2024-12-05 09:52:50.712471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:03.214 [2024-12-05 09:52:50.712481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:03.214 [2024-12-05 09:52:50.712488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-12-05 09:52:50.712527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.214 [2024-12-05 09:52:50.712537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:03.214 [2024-12-05 09:52:50.712548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:03.214 [2024-12-05 09:52:50.712556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-12-05 09:52:50.712595] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:03.214 [2024-12-05 09:52:50.712609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.214 [2024-12-05 09:52:50.712618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:03.214 [2024-12-05 09:52:50.712626] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:03.214 [2024-12-05 09:52:50.712635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-12-05 09:52:50.738642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.214 [2024-12-05 09:52:50.738698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:03.214 [2024-12-05 09:52:50.738713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.987 ms 00:21:03.214 [2024-12-05 09:52:50.738728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-12-05 09:52:50.738813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.214 [2024-12-05 09:52:50.738824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:03.214 [2024-12-05 09:52:50.738834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:03.214 [2024-12-05 09:52:50.738843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-12-05 09:52:50.741535] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 308.070 ms, result 0 00:21:04.158  [2024-12-05T09:52:53.175Z] Copying: 11/1024 [MB] (11 MBps) [2024-12-05T09:52:54.119Z] Copying: 49/1024 [MB] (38 MBps) [2024-12-05T09:52:55.065Z] Copying: 80/1024 [MB] (30 MBps) [2024-12-05T09:52:56.025Z] Copying: 101/1024 [MB] (21 MBps) [2024-12-05T09:52:56.971Z] Copying: 111/1024 [MB] (10 MBps) [2024-12-05T09:52:57.914Z] Copying: 122/1024 [MB] (10 MBps) [2024-12-05T09:52:58.921Z] Copying: 132/1024 [MB] (10 MBps) [2024-12-05T09:52:59.864Z] Copying: 153/1024 [MB] (21 MBps) [2024-12-05T09:53:00.811Z] Copying: 173/1024 [MB] (20 MBps) [2024-12-05T09:53:01.755Z] Copying: 196/1024 [MB] (22 MBps) [2024-12-05T09:53:03.144Z] Copying: 217/1024 [MB] (21 MBps) [2024-12-05T09:53:04.090Z] Copying: 236/1024 [MB] (18 MBps) [2024-12-05T09:53:05.033Z] Copying: 250/1024 [MB] (14 MBps) [2024-12-05T09:53:05.974Z] Copying: 277/1024 [MB] (27 MBps) [2024-12-05T09:53:06.918Z] Copying: 288/1024 [MB] (10 MBps) [2024-12-05T09:53:07.859Z] Copying: 298/1024 [MB] (10 MBps) [2024-12-05T09:53:08.803Z] Copying: 339/1024 [MB] (40 MBps) [2024-12-05T09:53:10.191Z] Copying: 362/1024 [MB] (22 MBps) [2024-12-05T09:53:10.764Z] Copying: 374/1024 [MB] (11 MBps) [2024-12-05T09:53:12.150Z] Copying: 391/1024 [MB] (17 MBps) [2024-12-05T09:53:13.094Z] Copying: 404/1024 [MB] (13 MBps) [2024-12-05T09:53:14.038Z] Copying: 416/1024 [MB] (12 MBps) [2024-12-05T09:53:14.981Z] Copying: 431/1024 [MB] (14 MBps) [2024-12-05T09:53:15.922Z] Copying: 468/1024 [MB] (37 MBps) [2024-12-05T09:53:16.865Z] Copying: 493/1024 [MB] (24 MBps) [2024-12-05T09:53:17.806Z] Copying: 507/1024 [MB] (14 MBps) [2024-12-05T09:53:19.186Z] Copying: 525/1024 [MB] (18 MBps) [2024-12-05T09:53:19.758Z] Copying: 541/1024 [MB] (15 MBps) [2024-12-05T09:53:21.151Z] Copying: 552/1024 [MB] (10 MBps) [2024-12-05T09:53:22.096Z] Copying: 562/1024 [MB] (10 MBps) [2024-12-05T09:53:23.041Z] Copying: 577/1024 [MB] (14 MBps) [2024-12-05T09:53:23.986Z] Copying: 599/1024 [MB] (22 MBps) [2024-12-05T09:53:24.946Z] Copying: 612/1024 [MB] (12 MBps) [2024-12-05T09:53:25.889Z] Copying: 627/1024 [MB] (15 MBps) [2024-12-05T09:53:26.834Z] Copying: 645/1024 [MB] (17 MBps) [2024-12-05T09:53:27.778Z] Copying: 663/1024 [MB] (18 MBps) [2024-12-05T09:53:29.167Z] Copying: 677/1024 [MB] (14 MBps) [2024-12-05T09:53:30.134Z] Copying: 687/1024 [MB] (10 MBps) [2024-12-05T09:53:30.765Z] 
Copying: 698/1024 [MB] (10 MBps) [2024-12-05T09:53:32.152Z] Copying: 708/1024 [MB] (10 MBps) [2024-12-05T09:53:33.099Z] Copying: 719/1024 [MB] (10 MBps) [2024-12-05T09:53:34.042Z] Copying: 729/1024 [MB] (10 MBps) [2024-12-05T09:53:34.985Z] Copying: 740/1024 [MB] (10 MBps) [2024-12-05T09:53:35.939Z] Copying: 750/1024 [MB] (10 MBps) [2024-12-05T09:53:36.883Z] Copying: 779/1024 [MB] (29 MBps) [2024-12-05T09:53:37.825Z] Copying: 817/1024 [MB] (38 MBps) [2024-12-05T09:53:38.770Z] Copying: 829/1024 [MB] (12 MBps) [2024-12-05T09:53:40.156Z] Copying: 842/1024 [MB] (12 MBps) [2024-12-05T09:53:41.100Z] Copying: 853/1024 [MB] (10 MBps) [2024-12-05T09:53:42.044Z] Copying: 868/1024 [MB] (15 MBps) [2024-12-05T09:53:42.989Z] Copying: 899608/1048576 [kB] (10096 kBps) [2024-12-05T09:53:43.934Z] Copying: 895/1024 [MB] (16 MBps) [2024-12-05T09:53:44.879Z] Copying: 910/1024 [MB] (14 MBps) [2024-12-05T09:53:45.820Z] Copying: 920/1024 [MB] (10 MBps) [2024-12-05T09:53:46.763Z] Copying: 938/1024 [MB] (18 MBps) [2024-12-05T09:53:48.149Z] Copying: 969/1024 [MB] (30 MBps) [2024-12-05T09:53:49.093Z] Copying: 994/1024 [MB] (24 MBps) [2024-12-05T09:53:50.038Z] Copying: 1004/1024 [MB] (10 MBps) [2024-12-05T09:53:50.038Z] Copying: 1019/1024 [MB] (15 MBps) [2024-12-05T09:53:50.038Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-05 09:53:49.914421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.409 [2024-12-05 09:53:49.914455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:02.409 [2024-12-05 09:53:49.914466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:02.409 [2024-12-05 09:53:49.914473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.409 [2024-12-05 09:53:49.914488] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:02.409 [2024-12-05 09:53:49.916650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.409 [2024-12-05 09:53:49.916674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:02.409 [2024-12-05 09:53:49.916690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.151 ms 00:22:02.409 [2024-12-05 09:53:49.916697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.409 [2024-12-05 09:53:49.918152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.409 [2024-12-05 09:53:49.918175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:02.409 [2024-12-05 09:53:49.918182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:22:02.409 [2024-12-05 09:53:49.918188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.409 [2024-12-05 09:53:49.931485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.409 [2024-12-05 09:53:49.931519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:02.409 [2024-12-05 09:53:49.931527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.285 ms 00:22:02.409 [2024-12-05 09:53:49.931533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.409 [2024-12-05 09:53:49.936330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.409 [2024-12-05 09:53:49.936353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:02.409 [2024-12-05 09:53:49.936360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 4.770 ms 00:22:02.409 [2024-12-05 09:53:49.936367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.409 [2024-12-05 09:53:49.954587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.409 [2024-12-05 09:53:49.954610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:02.409 [2024-12-05 09:53:49.954618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.183 ms 00:22:02.409 [2024-12-05 09:53:49.954625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.409 [2024-12-05 09:53:49.965762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.409 [2024-12-05 09:53:49.965788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:02.409 [2024-12-05 09:53:49.965796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.112 ms 00:22:02.409 [2024-12-05 09:53:49.965803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.409 [2024-12-05 09:53:49.965892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.409 [2024-12-05 09:53:49.965902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:02.409 [2024-12-05 09:53:49.965909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:02.409 [2024-12-05 09:53:49.965915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.409 [2024-12-05 09:53:49.983724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.409 [2024-12-05 09:53:49.983749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:02.409 [2024-12-05 09:53:49.983756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.798 ms 00:22:02.409 [2024-12-05 09:53:49.983762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.409 [2024-12-05 09:53:50.000872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.409 [2024-12-05 09:53:50.000897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:02.409 [2024-12-05 09:53:50.000904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.086 ms 00:22:02.409 [2024-12-05 09:53:50.000910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.409 [2024-12-05 09:53:50.018481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.409 [2024-12-05 09:53:50.018516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:02.409 [2024-12-05 09:53:50.018523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.548 ms 00:22:02.409 [2024-12-05 09:53:50.018529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.671 [2024-12-05 09:53:50.036478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.671 [2024-12-05 09:53:50.036504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:02.671 [2024-12-05 09:53:50.036519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.906 ms 00:22:02.671 [2024-12-05 09:53:50.036524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.672 [2024-12-05 09:53:50.036548] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:02.672 [2024-12-05 09:53:50.036560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 
wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
26: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036857] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.036997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.037003] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.037008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.037014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.037020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.037025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.037031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.037037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.037042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.037049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.037055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.037060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.037066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:02.672 [2024-12-05 09:53:50.037071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:02.673 [2024-12-05 09:53:50.037077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:02.673 [2024-12-05 09:53:50.037082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:02.673 [2024-12-05 09:53:50.037087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:02.673 [2024-12-05 09:53:50.037093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:02.673 [2024-12-05 09:53:50.037099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:02.673 [2024-12-05 09:53:50.037105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:02.673 [2024-12-05 09:53:50.037111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:02.673 [2024-12-05 09:53:50.037117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:02.673 [2024-12-05 09:53:50.037122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:02.673 [2024-12-05 09:53:50.037128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:02.673 [2024-12-05 09:53:50.037134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:02.673 [2024-12-05 09:53:50.037140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:02.673 [2024-12-05 
09:53:50.037152] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:02.673 [2024-12-05 09:53:50.037160] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b77a6074-e442-45aa-b19f-66a48a40d8dc 00:22:02.673 [2024-12-05 09:53:50.037166] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:02.673 [2024-12-05 09:53:50.037172] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:02.673 [2024-12-05 09:53:50.037178] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:02.673 [2024-12-05 09:53:50.037184] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:02.673 [2024-12-05 09:53:50.037190] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:02.673 [2024-12-05 09:53:50.037201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:02.673 [2024-12-05 09:53:50.037206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:02.673 [2024-12-05 09:53:50.037212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:02.673 [2024-12-05 09:53:50.037217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:02.673 [2024-12-05 09:53:50.037222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.673 [2024-12-05 09:53:50.037228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:02.673 [2024-12-05 09:53:50.037234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:22:02.673 [2024-12-05 09:53:50.037240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.673 [2024-12-05 09:53:50.047108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.673 [2024-12-05 09:53:50.047131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:02.673 [2024-12-05 09:53:50.047139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.855 ms 00:22:02.673 [2024-12-05 09:53:50.047146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.673 [2024-12-05 09:53:50.047416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.673 [2024-12-05 09:53:50.047466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:02.673 [2024-12-05 09:53:50.047473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:22:02.673 [2024-12-05 09:53:50.047482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.673 [2024-12-05 09:53:50.073278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.673 [2024-12-05 09:53:50.073306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:02.673 [2024-12-05 09:53:50.073314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.673 [2024-12-05 09:53:50.073320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.673 [2024-12-05 09:53:50.073360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.673 [2024-12-05 09:53:50.073366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:02.673 [2024-12-05 09:53:50.073372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.673 [2024-12-05 09:53:50.073380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.673 [2024-12-05 09:53:50.073431] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.673 [2024-12-05 09:53:50.073439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:02.673 [2024-12-05 09:53:50.073445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.673 [2024-12-05 09:53:50.073450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.673 [2024-12-05 09:53:50.073461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.673 [2024-12-05 09:53:50.073468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:02.673 [2024-12-05 09:53:50.073474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.673 [2024-12-05 09:53:50.073479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.673 [2024-12-05 09:53:50.131903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.673 [2024-12-05 09:53:50.131939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:02.673 [2024-12-05 09:53:50.131947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.673 [2024-12-05 09:53:50.131953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.673 [2024-12-05 09:53:50.179173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.673 [2024-12-05 09:53:50.179205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:02.673 [2024-12-05 09:53:50.179213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.673 [2024-12-05 09:53:50.179223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.673 [2024-12-05 09:53:50.179255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.673 [2024-12-05 09:53:50.179263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.673 [2024-12-05 09:53:50.179269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.673 [2024-12-05 09:53:50.179275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.673 [2024-12-05 09:53:50.179314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.673 [2024-12-05 09:53:50.179321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.673 [2024-12-05 09:53:50.179327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.673 [2024-12-05 09:53:50.179333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.673 [2024-12-05 09:53:50.179400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.673 [2024-12-05 09:53:50.179408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.673 [2024-12-05 09:53:50.179414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.673 [2024-12-05 09:53:50.179420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.673 [2024-12-05 09:53:50.179445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.673 [2024-12-05 09:53:50.179453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:02.673 [2024-12-05 09:53:50.179459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.673 [2024-12-05 09:53:50.179464] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:02.673 [2024-12-05 09:53:50.179491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.673 [2024-12-05 09:53:50.179501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.673 [2024-12-05 09:53:50.179507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.673 [2024-12-05 09:53:50.179525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.673 [2024-12-05 09:53:50.179557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.673 [2024-12-05 09:53:50.179565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.673 [2024-12-05 09:53:50.179571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.673 [2024-12-05 09:53:50.179576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.673 [2024-12-05 09:53:50.179669] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 265.219 ms, result 0 00:22:03.614 00:22:03.614 00:22:03.614 09:53:51 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:03.614 [2024-12-05 09:53:51.122427] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:22:03.614 [2024-12-05 09:53:51.122576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78065 ] 00:22:03.874 [2024-12-05 09:53:51.282951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.874 [2024-12-05 09:53:51.397216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.134 [2024-12-05 09:53:51.689874] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:04.134 [2024-12-05 09:53:51.689967] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:04.397 [2024-12-05 09:53:51.850860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.397 [2024-12-05 09:53:51.850929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:04.397 [2024-12-05 09:53:51.850945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:04.397 [2024-12-05 09:53:51.850954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.397 [2024-12-05 09:53:51.851010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.397 [2024-12-05 09:53:51.851024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:04.397 [2024-12-05 09:53:51.851033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:04.397 [2024-12-05 09:53:51.851041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.397 [2024-12-05 09:53:51.851064] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:04.397 [2024-12-05 09:53:51.852140] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:04.397 [2024-12-05 09:53:51.852273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:04.397 [2024-12-05 09:53:51.852304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:04.397 [2024-12-05 09:53:51.852334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:22:04.397 [2024-12-05 09:53:51.852357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.397 [2024-12-05 09:53:51.855253] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:04.397 [2024-12-05 09:53:51.872901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.397 [2024-12-05 09:53:51.872953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:04.397 [2024-12-05 09:53:51.872968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.658 ms 00:22:04.397 [2024-12-05 09:53:51.872977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.397 [2024-12-05 09:53:51.873059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.397 [2024-12-05 09:53:51.873071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:04.397 [2024-12-05 09:53:51.873080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:04.397 [2024-12-05 09:53:51.873089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.397 [2024-12-05 09:53:51.880967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.397 [2024-12-05 09:53:51.881010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:04.397 [2024-12-05 09:53:51.881021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.802 ms 00:22:04.397 [2024-12-05 09:53:51.881036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.397 [2024-12-05 09:53:51.881116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.397 [2024-12-05 09:53:51.881126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:04.397 [2024-12-05 09:53:51.881135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:04.397 [2024-12-05 09:53:51.881145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.397 [2024-12-05 09:53:51.881189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.397 [2024-12-05 09:53:51.881200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:04.397 [2024-12-05 09:53:51.881209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:04.397 [2024-12-05 09:53:51.881217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.397 [2024-12-05 09:53:51.881245] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:04.397 [2024-12-05 09:53:51.885273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.397 [2024-12-05 09:53:51.885313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:04.397 [2024-12-05 09:53:51.885327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.036 ms 00:22:04.397 [2024-12-05 09:53:51.885336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.397 [2024-12-05 09:53:51.885374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.397 [2024-12-05 09:53:51.885383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Decorate bands 00:22:04.397 [2024-12-05 09:53:51.885393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:04.397 [2024-12-05 09:53:51.885401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.397 [2024-12-05 09:53:51.885451] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:04.397 [2024-12-05 09:53:51.885476] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:04.397 [2024-12-05 09:53:51.885528] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:04.397 [2024-12-05 09:53:51.885551] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:04.397 [2024-12-05 09:53:51.885658] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:04.397 [2024-12-05 09:53:51.885670] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:04.397 [2024-12-05 09:53:51.885682] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:04.397 [2024-12-05 09:53:51.885693] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:04.397 [2024-12-05 09:53:51.885703] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:04.397 [2024-12-05 09:53:51.885711] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:04.397 [2024-12-05 09:53:51.885719] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:04.397 [2024-12-05 09:53:51.885730] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:04.397 [2024-12-05 09:53:51.885739] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:04.397 [2024-12-05 09:53:51.885747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.397 [2024-12-05 09:53:51.885755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:04.397 [2024-12-05 09:53:51.885763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:22:04.397 [2024-12-05 09:53:51.885770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.397 [2024-12-05 09:53:51.885852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.397 [2024-12-05 09:53:51.885861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:04.397 [2024-12-05 09:53:51.885870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:04.397 [2024-12-05 09:53:51.885877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.397 [2024-12-05 09:53:51.885984] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:04.397 [2024-12-05 09:53:51.885995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:04.397 [2024-12-05 09:53:51.886005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:04.397 [2024-12-05 09:53:51.886014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:04.397 [2024-12-05 09:53:51.886023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:04.397 [2024-12-05 09:53:51.886031] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:04.397 [2024-12-05 09:53:51.886037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:04.397 [2024-12-05 09:53:51.886045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:04.397 [2024-12-05 09:53:51.886053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:04.397 [2024-12-05 09:53:51.886061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:04.397 [2024-12-05 09:53:51.886068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:04.397 [2024-12-05 09:53:51.886077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:04.397 [2024-12-05 09:53:51.886086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:04.397 [2024-12-05 09:53:51.886101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:04.397 [2024-12-05 09:53:51.886110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:04.397 [2024-12-05 09:53:51.886118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:04.397 [2024-12-05 09:53:51.886125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:04.397 [2024-12-05 09:53:51.886133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:04.397 [2024-12-05 09:53:51.886141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:04.397 [2024-12-05 09:53:51.886148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:04.397 [2024-12-05 09:53:51.886157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:04.397 [2024-12-05 09:53:51.886164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:04.398 [2024-12-05 09:53:51.886171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:04.398 [2024-12-05 09:53:51.886181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:04.398 [2024-12-05 09:53:51.886187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:04.398 [2024-12-05 09:53:51.886195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:04.398 [2024-12-05 09:53:51.886202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:04.398 [2024-12-05 09:53:51.886209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:04.398 [2024-12-05 09:53:51.886216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:04.398 [2024-12-05 09:53:51.886224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:04.398 [2024-12-05 09:53:51.886231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:04.398 [2024-12-05 09:53:51.886239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:04.398 [2024-12-05 09:53:51.886246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:04.398 [2024-12-05 09:53:51.886252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:04.398 [2024-12-05 09:53:51.886259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:04.398 [2024-12-05 09:53:51.886265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:04.398 [2024-12-05 09:53:51.886272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 
00:22:04.398 [2024-12-05 09:53:51.886279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:04.398 [2024-12-05 09:53:51.886286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:04.398 [2024-12-05 09:53:51.886292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:04.398 [2024-12-05 09:53:51.886299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:04.398 [2024-12-05 09:53:51.886307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:04.398 [2024-12-05 09:53:51.886315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:04.398 [2024-12-05 09:53:51.886321] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:04.398 [2024-12-05 09:53:51.886331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:04.398 [2024-12-05 09:53:51.886339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:04.398 [2024-12-05 09:53:51.886350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:04.398 [2024-12-05 09:53:51.886358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:04.398 [2024-12-05 09:53:51.886367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:04.398 [2024-12-05 09:53:51.886376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:04.398 [2024-12-05 09:53:51.886383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:04.398 [2024-12-05 09:53:51.886390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:04.398 [2024-12-05 09:53:51.886397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:04.398 [2024-12-05 09:53:51.886406] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:04.398 [2024-12-05 09:53:51.886416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:04.398 [2024-12-05 09:53:51.886429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:04.398 [2024-12-05 09:53:51.886437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:04.398 [2024-12-05 09:53:51.886445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:04.398 [2024-12-05 09:53:51.886455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:04.398 [2024-12-05 09:53:51.886463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:04.398 [2024-12-05 09:53:51.886472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:04.398 [2024-12-05 09:53:51.886482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:04.398 [2024-12-05 09:53:51.886490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:04.398 [2024-12-05 
09:53:51.886499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:04.398 [2024-12-05 09:53:51.886520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:04.398 [2024-12-05 09:53:51.886529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:04.398 [2024-12-05 09:53:51.886536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:04.398 [2024-12-05 09:53:51.886544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:04.398 [2024-12-05 09:53:51.886553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:04.398 [2024-12-05 09:53:51.886562] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:04.398 [2024-12-05 09:53:51.886572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:04.398 [2024-12-05 09:53:51.886582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:04.398 [2024-12-05 09:53:51.886590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:04.398 [2024-12-05 09:53:51.886598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:04.398 [2024-12-05 09:53:51.886605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:04.398 [2024-12-05 09:53:51.886613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.398 [2024-12-05 09:53:51.886622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:04.398 [2024-12-05 09:53:51.886630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:22:04.398 [2024-12-05 09:53:51.886638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.398 [2024-12-05 09:53:51.918107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.398 [2024-12-05 09:53:51.918155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:04.398 [2024-12-05 09:53:51.918167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.423 ms 00:22:04.398 [2024-12-05 09:53:51.918179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.398 [2024-12-05 09:53:51.918268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.398 [2024-12-05 09:53:51.918278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:04.398 [2024-12-05 09:53:51.918287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:04.398 [2024-12-05 09:53:51.918295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.398 [2024-12-05 09:53:51.967950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.398 [2024-12-05 
09:53:51.968007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:04.398 [2024-12-05 09:53:51.968021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.594 ms 00:22:04.398 [2024-12-05 09:53:51.968031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.398 [2024-12-05 09:53:51.968079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.398 [2024-12-05 09:53:51.968090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:04.398 [2024-12-05 09:53:51.968104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:04.398 [2024-12-05 09:53:51.968113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.398 [2024-12-05 09:53:51.968736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.398 [2024-12-05 09:53:51.968768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:04.398 [2024-12-05 09:53:51.968779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:22:04.398 [2024-12-05 09:53:51.968788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.398 [2024-12-05 09:53:51.968941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.398 [2024-12-05 09:53:51.968954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:04.398 [2024-12-05 09:53:51.968969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:22:04.398 [2024-12-05 09:53:51.968977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.398 [2024-12-05 09:53:51.984541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.398 [2024-12-05 09:53:51.984583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:04.398 [2024-12-05 09:53:51.984595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.543 ms 00:22:04.398 [2024-12-05 09:53:51.984603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.398 [2024-12-05 09:53:51.998972] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:04.398 [2024-12-05 09:53:51.999016] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:04.398 [2024-12-05 09:53:51.999030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.398 [2024-12-05 09:53:51.999039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:04.398 [2024-12-05 09:53:51.999049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.319 ms 00:22:04.398 [2024-12-05 09:53:51.999057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.661 [2024-12-05 09:53:52.024436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.661 [2024-12-05 09:53:52.024484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:04.661 [2024-12-05 09:53:52.024496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.324 ms 00:22:04.661 [2024-12-05 09:53:52.024505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.661 [2024-12-05 09:53:52.037364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.661 [2024-12-05 09:53:52.037411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:22:04.661 [2024-12-05 09:53:52.037422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.779 ms 00:22:04.661 [2024-12-05 09:53:52.037430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.661 [2024-12-05 09:53:52.049726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.661 [2024-12-05 09:53:52.049774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:04.661 [2024-12-05 09:53:52.049786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.247 ms 00:22:04.661 [2024-12-05 09:53:52.049793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.661 [2024-12-05 09:53:52.050441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.661 [2024-12-05 09:53:52.050473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:04.661 [2024-12-05 09:53:52.050486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:22:04.661 [2024-12-05 09:53:52.050494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.661 [2024-12-05 09:53:52.116013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.661 [2024-12-05 09:53:52.116076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:04.661 [2024-12-05 09:53:52.116096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.483 ms 00:22:04.661 [2024-12-05 09:53:52.116105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.661 [2024-12-05 09:53:52.127712] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:04.661 [2024-12-05 09:53:52.130743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.661 [2024-12-05 09:53:52.130784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:04.661 [2024-12-05 09:53:52.130796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.582 ms 00:22:04.661 [2024-12-05 09:53:52.130805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.661 [2024-12-05 09:53:52.130887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.661 [2024-12-05 09:53:52.130899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:04.661 [2024-12-05 09:53:52.130912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:04.661 [2024-12-05 09:53:52.130921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.661 [2024-12-05 09:53:52.130992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.661 [2024-12-05 09:53:52.131004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:04.661 [2024-12-05 09:53:52.131015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:04.661 [2024-12-05 09:53:52.131024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.661 [2024-12-05 09:53:52.131046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.661 [2024-12-05 09:53:52.131055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:04.661 [2024-12-05 09:53:52.131064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:04.661 [2024-12-05 09:53:52.131072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
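Every management step in the startup trace above is reported as the same four-record group: an "Action" marker, the step "name", its "duration" in milliseconds, and a "status" code (0 on success). The following is a minimal, self-contained C sketch of that Action/name/duration/status logging pattern; run_step() and ftl_step_fn are hypothetical illustration names, not the actual mngt/ftl_mngt.c API.

#include <stdio.h>
#include <time.h>

/* Hypothetical step callback: returns 0 on success, matching the
   "status: 0" records in the trace. */
typedef int (*ftl_step_fn)(void);

/* Time one management step and emit the same four-record group the
   FTL trace uses: Action, name, duration, status. */
static int run_step(const char *dev, const char *name, ftl_step_fn fn)
{
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	int status = fn();
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
	            (t1.tv_nsec - t0.tv_nsec) / 1e6;

	printf("[FTL][%s] Action\n", dev);
	printf("[FTL][%s] name: %s\n", dev, name);
	printf("[FTL][%s] duration: %.3f ms\n", dev, ms);
	printf("[FTL][%s] status: %d\n", dev, status);
	return status;
}

/* Stand-in for a real step such as "Load super block". */
static int load_super_block(void) { return 0; }

int main(void)
{
	return run_step("ftl0", "Load super block", load_super_block);
}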
00:22:04.661 [2024-12-05 09:53:52.131111] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:04.661 [2024-12-05 09:53:52.131122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.661 [2024-12-05 09:53:52.131131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:04.661 [2024-12-05 09:53:52.131140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:04.661 [2024-12-05 09:53:52.131149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.661 [2024-12-05 09:53:52.156869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.661 [2024-12-05 09:53:52.156917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:04.661 [2024-12-05 09:53:52.156935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.701 ms 00:22:04.661 [2024-12-05 09:53:52.156944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.661 [2024-12-05 09:53:52.157026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.661 [2024-12-05 09:53:52.157036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:04.661 [2024-12-05 09:53:52.157045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:04.661 [2024-12-05 09:53:52.157053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.661 [2024-12-05 09:53:52.158282] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 306.948 ms, result 0 00:22:06.049  [2024-12-05T09:53:54.621Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-05T09:53:55.564Z] Copying: 36/1024 [MB] (20 MBps) [2024-12-05T09:53:56.534Z] Copying: 56/1024 [MB] (19 MBps) [2024-12-05T09:53:57.478Z] Copying: 77/1024 [MB] (21 MBps) [2024-12-05T09:53:58.424Z] Copying: 91/1024 [MB] (13 MBps) [2024-12-05T09:53:59.367Z] Copying: 102/1024 [MB] (10 MBps) [2024-12-05T09:54:00.754Z] Copying: 113/1024 [MB] (11 MBps) [2024-12-05T09:54:01.765Z] Copying: 123/1024 [MB] (10 MBps) [2024-12-05T09:54:02.711Z] Copying: 134/1024 [MB] (10 MBps) [2024-12-05T09:54:03.656Z] Copying: 145/1024 [MB] (10 MBps) [2024-12-05T09:54:04.600Z] Copying: 156/1024 [MB] (10 MBps) [2024-12-05T09:54:05.543Z] Copying: 167/1024 [MB] (11 MBps) [2024-12-05T09:54:06.482Z] Copying: 177/1024 [MB] (10 MBps) [2024-12-05T09:54:07.422Z] Copying: 188/1024 [MB] (10 MBps) [2024-12-05T09:54:08.368Z] Copying: 199/1024 [MB] (10 MBps) [2024-12-05T09:54:09.757Z] Copying: 210/1024 [MB] (10 MBps) [2024-12-05T09:54:10.702Z] Copying: 221/1024 [MB] (10 MBps) [2024-12-05T09:54:11.644Z] Copying: 235/1024 [MB] (13 MBps) [2024-12-05T09:54:12.586Z] Copying: 252/1024 [MB] (17 MBps) [2024-12-05T09:54:13.529Z] Copying: 267/1024 [MB] (14 MBps) [2024-12-05T09:54:14.472Z] Copying: 296/1024 [MB] (29 MBps) [2024-12-05T09:54:15.412Z] Copying: 309/1024 [MB] (12 MBps) [2024-12-05T09:54:16.352Z] Copying: 320/1024 [MB] (10 MBps) [2024-12-05T09:54:17.736Z] Copying: 341/1024 [MB] (21 MBps) [2024-12-05T09:54:18.681Z] Copying: 352/1024 [MB] (11 MBps) [2024-12-05T09:54:19.624Z] Copying: 363/1024 [MB] (11 MBps) [2024-12-05T09:54:20.568Z] Copying: 374/1024 [MB] (10 MBps) [2024-12-05T09:54:21.511Z] Copying: 384/1024 [MB] (10 MBps) [2024-12-05T09:54:22.451Z] Copying: 395/1024 [MB] (10 MBps) [2024-12-05T09:54:23.391Z] Copying: 413/1024 [MB] (17 MBps) [2024-12-05T09:54:24.776Z] Copying: 426/1024 [MB] (12 MBps) 
[2024-12-05T09:54:25.347Z] Copying: 436/1024 [MB] (10 MBps) [2024-12-05T09:54:26.731Z] Copying: 451/1024 [MB] (14 MBps) [2024-12-05T09:54:27.675Z] Copying: 464/1024 [MB] (13 MBps) [2024-12-05T09:54:28.618Z] Copying: 478/1024 [MB] (13 MBps) [2024-12-05T09:54:29.562Z] Copying: 498/1024 [MB] (20 MBps) [2024-12-05T09:54:30.507Z] Copying: 523/1024 [MB] (24 MBps) [2024-12-05T09:54:31.451Z] Copying: 544/1024 [MB] (21 MBps) [2024-12-05T09:54:32.422Z] Copying: 563/1024 [MB] (18 MBps) [2024-12-05T09:54:33.392Z] Copying: 578/1024 [MB] (15 MBps) [2024-12-05T09:54:34.377Z] Copying: 603/1024 [MB] (24 MBps) [2024-12-05T09:54:35.760Z] Copying: 623/1024 [MB] (20 MBps) [2024-12-05T09:54:36.702Z] Copying: 640/1024 [MB] (17 MBps) [2024-12-05T09:54:37.646Z] Copying: 653/1024 [MB] (13 MBps) [2024-12-05T09:54:38.619Z] Copying: 670/1024 [MB] (17 MBps) [2024-12-05T09:54:39.563Z] Copying: 682/1024 [MB] (11 MBps) [2024-12-05T09:54:40.508Z] Copying: 693/1024 [MB] (11 MBps) [2024-12-05T09:54:41.451Z] Copying: 706/1024 [MB] (12 MBps) [2024-12-05T09:54:42.394Z] Copying: 722/1024 [MB] (15 MBps) [2024-12-05T09:54:43.784Z] Copying: 745/1024 [MB] (23 MBps) [2024-12-05T09:54:44.357Z] Copying: 759/1024 [MB] (14 MBps) [2024-12-05T09:54:45.741Z] Copying: 776/1024 [MB] (17 MBps) [2024-12-05T09:54:46.683Z] Copying: 792/1024 [MB] (15 MBps) [2024-12-05T09:54:47.641Z] Copying: 813/1024 [MB] (21 MBps) [2024-12-05T09:54:48.590Z] Copying: 829/1024 [MB] (15 MBps) [2024-12-05T09:54:49.535Z] Copying: 850/1024 [MB] (20 MBps) [2024-12-05T09:54:50.477Z] Copying: 869/1024 [MB] (19 MBps) [2024-12-05T09:54:51.421Z] Copying: 881/1024 [MB] (11 MBps) [2024-12-05T09:54:52.363Z] Copying: 892/1024 [MB] (11 MBps) [2024-12-05T09:54:53.751Z] Copying: 903/1024 [MB] (10 MBps) [2024-12-05T09:54:54.693Z] Copying: 915/1024 [MB] (12 MBps) [2024-12-05T09:54:55.645Z] Copying: 932/1024 [MB] (16 MBps) [2024-12-05T09:54:56.590Z] Copying: 952/1024 [MB] (20 MBps) [2024-12-05T09:54:57.534Z] Copying: 964/1024 [MB] (11 MBps) [2024-12-05T09:54:58.476Z] Copying: 981/1024 [MB] (17 MBps) [2024-12-05T09:54:59.418Z] Copying: 1002/1024 [MB] (20 MBps) [2024-12-05T09:54:59.679Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-05 09:54:59.674193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.050 [2024-12-05 09:54:59.674283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:12.050 [2024-12-05 09:54:59.674298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:12.050 [2024-12-05 09:54:59.674308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.050 [2024-12-05 09:54:59.674332] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:12.312 [2024-12-05 09:54:59.677618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.312 [2024-12-05 09:54:59.677668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:12.312 [2024-12-05 09:54:59.677679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.267 ms 00:23:12.312 [2024-12-05 09:54:59.677688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.312 [2024-12-05 09:54:59.677921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.312 [2024-12-05 09:54:59.677933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:12.312 [2024-12-05 09:54:59.677942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 
00:23:12.312 [2024-12-05 09:54:59.677950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.312 [2024-12-05 09:54:59.681885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.312 [2024-12-05 09:54:59.681911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:12.312 [2024-12-05 09:54:59.681922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.921 ms 00:23:12.312 [2024-12-05 09:54:59.681936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.312 [2024-12-05 09:54:59.688158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.312 [2024-12-05 09:54:59.688203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:12.312 [2024-12-05 09:54:59.688214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.203 ms 00:23:12.312 [2024-12-05 09:54:59.688223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.312 [2024-12-05 09:54:59.716568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.312 [2024-12-05 09:54:59.716620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:12.312 [2024-12-05 09:54:59.716634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.270 ms 00:23:12.312 [2024-12-05 09:54:59.716642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.312 [2024-12-05 09:54:59.733751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.312 [2024-12-05 09:54:59.733802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:12.312 [2024-12-05 09:54:59.733815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.058 ms 00:23:12.312 [2024-12-05 09:54:59.733824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.312 [2024-12-05 09:54:59.733988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.312 [2024-12-05 09:54:59.734001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:12.312 [2024-12-05 09:54:59.734011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:23:12.312 [2024-12-05 09:54:59.734020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.312 [2024-12-05 09:54:59.760033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.312 [2024-12-05 09:54:59.760081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:12.312 [2024-12-05 09:54:59.760093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.996 ms 00:23:12.312 [2024-12-05 09:54:59.760101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.312 [2024-12-05 09:54:59.785066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.312 [2024-12-05 09:54:59.785115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:12.312 [2024-12-05 09:54:59.785127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.918 ms 00:23:12.312 [2024-12-05 09:54:59.785134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.312 [2024-12-05 09:54:59.809318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.312 [2024-12-05 09:54:59.809366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:12.312 [2024-12-05 09:54:59.809377] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.138 ms 00:23:12.312 [2024-12-05 09:54:59.809386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.312 [2024-12-05 09:54:59.834055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.312 [2024-12-05 09:54:59.834118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:12.312 [2024-12-05 09:54:59.834129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.595 ms 00:23:12.312 [2024-12-05 09:54:59.834137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.312 [2024-12-05 09:54:59.834177] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:12.312 [2024-12-05 09:54:59.834200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:12.312 [2024-12-05 09:54:59.834215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:12.312 [2024-12-05 09:54:59.834224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:12.312 [2024-12-05 09:54:59.834233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:12.312 [2024-12-05 09:54:59.834242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:12.312 [2024-12-05 09:54:59.834251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:12.312 [2024-12-05 09:54:59.834260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:12.312 [2024-12-05 09:54:59.834268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:12.312 [2024-12-05 09:54:59.834276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:12.312 [2024-12-05 09:54:59.834284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:12.312 [2024-12-05 09:54:59.834292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:12.312 [2024-12-05 09:54:59.834299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:12.312 [2024-12-05 09:54:59.834309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834578] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 
09:54:59.834780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:23:12.313 [2024-12-05 09:54:59.834978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.834993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.835000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.835008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.835016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:12.313 [2024-12-05 09:54:59.835032] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:12.313 [2024-12-05 09:54:59.835041] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b77a6074-e442-45aa-b19f-66a48a40d8dc 00:23:12.313 [2024-12-05 09:54:59.835049] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:12.314 [2024-12-05 09:54:59.835058] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:12.314 [2024-12-05 09:54:59.835065] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:12.314 [2024-12-05 09:54:59.835074] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:12.314 [2024-12-05 09:54:59.835088] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:12.314 [2024-12-05 09:54:59.835096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:12.314 [2024-12-05 09:54:59.835104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:12.314 [2024-12-05 09:54:59.835111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:12.314 [2024-12-05 09:54:59.835118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:12.314 [2024-12-05 09:54:59.835126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.314 [2024-12-05 09:54:59.835135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:12.314 [2024-12-05 09:54:59.835143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:23:12.314 [2024-12-05 09:54:59.835155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.314 [2024-12-05 09:54:59.848791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.314 [2024-12-05 09:54:59.848834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:12.314 [2024-12-05 09:54:59.848846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.616 ms 00:23:12.314 [2024-12-05 09:54:59.848854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.314 [2024-12-05 09:54:59.849252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.314 [2024-12-05 09:54:59.849272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:12.314 [2024-12-05 09:54:59.849288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:23:12.314 [2024-12-05 09:54:59.849296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.314 [2024-12-05 09:54:59.885659] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:23:12.314 [2024-12-05 09:54:59.885708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:12.314 [2024-12-05 09:54:59.885720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.314 [2024-12-05 09:54:59.885729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.314 [2024-12-05 09:54:59.885796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.314 [2024-12-05 09:54:59.885807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:12.314 [2024-12-05 09:54:59.885822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.314 [2024-12-05 09:54:59.885831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.314 [2024-12-05 09:54:59.885914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.314 [2024-12-05 09:54:59.885927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:12.314 [2024-12-05 09:54:59.885937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.314 [2024-12-05 09:54:59.885946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.314 [2024-12-05 09:54:59.885964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.314 [2024-12-05 09:54:59.885973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:12.314 [2024-12-05 09:54:59.885982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.314 [2024-12-05 09:54:59.885995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.575 [2024-12-05 09:54:59.970528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.575 [2024-12-05 09:54:59.970591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:12.575 [2024-12-05 09:54:59.970604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.575 [2024-12-05 09:54:59.970613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.575 [2024-12-05 09:55:00.039894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.575 [2024-12-05 09:55:00.039976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:12.575 [2024-12-05 09:55:00.039996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.575 [2024-12-05 09:55:00.040005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.575 [2024-12-05 09:55:00.040066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.575 [2024-12-05 09:55:00.040077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:12.575 [2024-12-05 09:55:00.040086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.575 [2024-12-05 09:55:00.040095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.575 [2024-12-05 09:55:00.040152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.575 [2024-12-05 09:55:00.040162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:12.575 [2024-12-05 09:55:00.040171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.575 [2024-12-05 09:55:00.040179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
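The Rollback records here and below walk the startup sequence in reverse: each initialization step has a paired teardown, replayed last-in-first-out at shutdown, so "Open base bdev" is undone last because it ran first. Below is a minimal C sketch of that paired-step pattern, assuming hypothetical struct step / ok() / undo() names rather than SPDK's actual management-process code.

#include <stdio.h>

/* Hypothetical paired step: a forward action plus its rollback.
   The shutdown trace replays rollbacks in reverse order of the
   matching startup actions. */
struct step {
	const char *name;
	int  (*action)(void);
	void (*rollback)(void);
};

static int  ok(void)   { return 0; } /* stand-in action */
static void undo(void) { }           /* stand-in rollback */

int main(void)
{
	struct step steps[] = {
		{ "Open base bdev",          ok, undo },
		{ "Open cache bdev",         ok, undo },
		{ "Initialize superblock",   ok, undo },
		{ "Initialize memory pools", ok, undo },
	};
	int n = (int)(sizeof(steps) / sizeof(steps[0]));
	int done = 0;

	for (; done < n; done++) {
		printf("[FTL][ftl0] Action: %s\n", steps[done].name);
		if (steps[done].action() != 0)
			break; /* a step that fails is not rolled back itself */
	}

	/* Tear down in reverse (LIFO), mirroring the Rollback records. */
	while (done-- > 0) {
		printf("[FTL][ftl0] Rollback: %s\n", steps[done].name);
		steps[done].rollback();
	}
	return 0;
}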
00:23:12.575 [2024-12-05 09:55:00.040278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.575 [2024-12-05 09:55:00.040289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:12.575 [2024-12-05 09:55:00.040297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.575 [2024-12-05 09:55:00.040306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.575 [2024-12-05 09:55:00.040341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.575 [2024-12-05 09:55:00.040351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:12.575 [2024-12-05 09:55:00.040360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.575 [2024-12-05 09:55:00.040368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.575 [2024-12-05 09:55:00.040413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.575 [2024-12-05 09:55:00.040422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:12.575 [2024-12-05 09:55:00.040431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.575 [2024-12-05 09:55:00.040439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.575 [2024-12-05 09:55:00.040485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.575 [2024-12-05 09:55:00.040495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:12.575 [2024-12-05 09:55:00.040503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.575 [2024-12-05 09:55:00.040538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.575 [2024-12-05 09:55:00.040670] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.444 ms, result 0 00:23:13.517 00:23:13.517 00:23:13.517 09:55:00 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:15.428 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:15.428 09:55:03 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:15.689 [2024-12-05 09:55:03.105852] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:23:15.689 [2024-12-05 09:55:03.106004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78800 ] 00:23:15.689 [2024-12-05 09:55:03.270861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.952 [2024-12-05 09:55:03.387833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.215 [2024-12-05 09:55:03.681839] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:16.215 [2024-12-05 09:55:03.681929] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:16.215 [2024-12-05 09:55:03.842265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.215 [2024-12-05 09:55:03.842338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:16.215 [2024-12-05 09:55:03.842354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:16.215 [2024-12-05 09:55:03.842363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.215 [2024-12-05 09:55:03.842418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.215 [2024-12-05 09:55:03.842433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:16.215 [2024-12-05 09:55:03.842442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:16.215 [2024-12-05 09:55:03.842449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.215 [2024-12-05 09:55:03.842471] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:16.478 [2024-12-05 09:55:03.843347] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:16.478 [2024-12-05 09:55:03.843391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.478 [2024-12-05 09:55:03.843401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:16.478 [2024-12-05 09:55:03.843411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:23:16.478 [2024-12-05 09:55:03.843419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.478 [2024-12-05 09:55:03.845181] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:16.478 [2024-12-05 09:55:03.859340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.478 [2024-12-05 09:55:03.859392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:16.478 [2024-12-05 09:55:03.859406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.161 ms 00:23:16.478 [2024-12-05 09:55:03.859413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.478 [2024-12-05 09:55:03.859496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.478 [2024-12-05 09:55:03.859506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:16.478 [2024-12-05 09:55:03.859529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:16.478 [2024-12-05 09:55:03.859538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.478 [2024-12-05 09:55:03.867451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:16.478 [2024-12-05 09:55:03.867498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:16.478 [2024-12-05 09:55:03.867523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.834 ms 00:23:16.478 [2024-12-05 09:55:03.867537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.478 [2024-12-05 09:55:03.867618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.478 [2024-12-05 09:55:03.867627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:16.478 [2024-12-05 09:55:03.867637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:16.478 [2024-12-05 09:55:03.867645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.478 [2024-12-05 09:55:03.867689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.478 [2024-12-05 09:55:03.867699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:16.478 [2024-12-05 09:55:03.867708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:16.478 [2024-12-05 09:55:03.867716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.478 [2024-12-05 09:55:03.867742] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:16.478 [2024-12-05 09:55:03.871788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.478 [2024-12-05 09:55:03.871826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:16.478 [2024-12-05 09:55:03.871840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.052 ms 00:23:16.478 [2024-12-05 09:55:03.871847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.478 [2024-12-05 09:55:03.871884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.478 [2024-12-05 09:55:03.871893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:16.478 [2024-12-05 09:55:03.871901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:16.478 [2024-12-05 09:55:03.871909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.478 [2024-12-05 09:55:03.871972] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:16.478 [2024-12-05 09:55:03.871997] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:16.478 [2024-12-05 09:55:03.872034] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:16.478 [2024-12-05 09:55:03.872055] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:16.478 [2024-12-05 09:55:03.872161] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:16.478 [2024-12-05 09:55:03.872172] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:16.478 [2024-12-05 09:55:03.872183] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:16.478 [2024-12-05 09:55:03.872193] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:16.478 [2024-12-05 09:55:03.872204] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:16.478 [2024-12-05 09:55:03.872212] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:16.478 [2024-12-05 09:55:03.872220] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:16.478 [2024-12-05 09:55:03.872231] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:16.478 [2024-12-05 09:55:03.872240] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:16.478 [2024-12-05 09:55:03.872248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.478 [2024-12-05 09:55:03.872256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:16.478 [2024-12-05 09:55:03.872264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:23:16.478 [2024-12-05 09:55:03.872272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.478 [2024-12-05 09:55:03.872358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.478 [2024-12-05 09:55:03.872367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:16.478 [2024-12-05 09:55:03.872375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:16.478 [2024-12-05 09:55:03.872382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.478 [2024-12-05 09:55:03.872488] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:16.478 [2024-12-05 09:55:03.872499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:16.478 [2024-12-05 09:55:03.872523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:16.478 [2024-12-05 09:55:03.872533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.478 [2024-12-05 09:55:03.872541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:16.478 [2024-12-05 09:55:03.872548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:16.478 [2024-12-05 09:55:03.872556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:16.478 [2024-12-05 09:55:03.872563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:16.478 [2024-12-05 09:55:03.872570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:16.478 [2024-12-05 09:55:03.872577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:16.478 [2024-12-05 09:55:03.872585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:16.478 [2024-12-05 09:55:03.872592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:16.478 [2024-12-05 09:55:03.872603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:16.478 [2024-12-05 09:55:03.872617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:16.478 [2024-12-05 09:55:03.872624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:16.478 [2024-12-05 09:55:03.872631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.478 [2024-12-05 09:55:03.872638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:16.478 [2024-12-05 09:55:03.872645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:16.478 [2024-12-05 09:55:03.872653] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.478 [2024-12-05 09:55:03.872660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:16.478 [2024-12-05 09:55:03.872667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:16.478 [2024-12-05 09:55:03.872674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:16.478 [2024-12-05 09:55:03.872681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:16.478 [2024-12-05 09:55:03.872688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:16.478 [2024-12-05 09:55:03.872695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:16.478 [2024-12-05 09:55:03.872702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:16.478 [2024-12-05 09:55:03.872709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:16.478 [2024-12-05 09:55:03.872715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:16.478 [2024-12-05 09:55:03.872722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:16.478 [2024-12-05 09:55:03.872730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:16.478 [2024-12-05 09:55:03.872736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:16.478 [2024-12-05 09:55:03.872743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:16.478 [2024-12-05 09:55:03.872750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:16.478 [2024-12-05 09:55:03.872757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:16.478 [2024-12-05 09:55:03.872765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:16.478 [2024-12-05 09:55:03.872771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:16.478 [2024-12-05 09:55:03.872778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:16.478 [2024-12-05 09:55:03.872785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:16.478 [2024-12-05 09:55:03.872792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:16.478 [2024-12-05 09:55:03.872798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.478 [2024-12-05 09:55:03.872805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:16.478 [2024-12-05 09:55:03.872811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:16.478 [2024-12-05 09:55:03.872819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.478 [2024-12-05 09:55:03.872826] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:16.478 [2024-12-05 09:55:03.872836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:16.478 [2024-12-05 09:55:03.872844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:16.478 [2024-12-05 09:55:03.872851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.478 [2024-12-05 09:55:03.872860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:16.478 [2024-12-05 09:55:03.872867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:16.478 [2024-12-05 09:55:03.872874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:16.478 
[2024-12-05 09:55:03.872881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:16.478 [2024-12-05 09:55:03.872887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:16.478 [2024-12-05 09:55:03.872894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:16.478 [2024-12-05 09:55:03.872903] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:16.478 [2024-12-05 09:55:03.872912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:16.479 [2024-12-05 09:55:03.872924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:16.479 [2024-12-05 09:55:03.872932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:16.479 [2024-12-05 09:55:03.872939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:16.479 [2024-12-05 09:55:03.872946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:16.479 [2024-12-05 09:55:03.872953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:16.479 [2024-12-05 09:55:03.872960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:16.479 [2024-12-05 09:55:03.872967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:16.479 [2024-12-05 09:55:03.872974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:16.479 [2024-12-05 09:55:03.872981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:16.479 [2024-12-05 09:55:03.872988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:16.479 [2024-12-05 09:55:03.872996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:16.479 [2024-12-05 09:55:03.873003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:16.479 [2024-12-05 09:55:03.873010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:16.479 [2024-12-05 09:55:03.873017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:16.479 [2024-12-05 09:55:03.873025] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:16.479 [2024-12-05 09:55:03.873033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:16.479 [2024-12-05 09:55:03.873041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:16.479 [2024-12-05 09:55:03.873049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:16.479 [2024-12-05 09:55:03.873056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:16.479 [2024-12-05 09:55:03.873063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:16.479 [2024-12-05 09:55:03.873071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.479 [2024-12-05 09:55:03.873081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:16.479 [2024-12-05 09:55:03.873090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:23:16.479 [2024-12-05 09:55:03.873097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.479 [2024-12-05 09:55:03.905702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.479 [2024-12-05 09:55:03.905754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:16.479 [2024-12-05 09:55:03.905766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.559 ms 00:23:16.479 [2024-12-05 09:55:03.905779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.479 [2024-12-05 09:55:03.905873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.479 [2024-12-05 09:55:03.905882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:16.479 [2024-12-05 09:55:03.905891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:16.479 [2024-12-05 09:55:03.905899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.479 [2024-12-05 09:55:03.955669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.479 [2024-12-05 09:55:03.955725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:16.479 [2024-12-05 09:55:03.955738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.709 ms 00:23:16.479 [2024-12-05 09:55:03.955747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.479 [2024-12-05 09:55:03.955796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.479 [2024-12-05 09:55:03.955807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:16.479 [2024-12-05 09:55:03.955820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:16.479 [2024-12-05 09:55:03.955829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.479 [2024-12-05 09:55:03.956447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.479 [2024-12-05 09:55:03.956485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:16.479 [2024-12-05 09:55:03.956497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:23:16.479 [2024-12-05 09:55:03.956506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.479 [2024-12-05 09:55:03.956682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.479 [2024-12-05 09:55:03.956694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:16.479 [2024-12-05 09:55:03.956709] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:23:16.479 [2024-12-05 09:55:03.956717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.479 [2024-12-05 09:55:03.972151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.479 [2024-12-05 09:55:03.972198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:16.479 [2024-12-05 09:55:03.972209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.414 ms 00:23:16.479 [2024-12-05 09:55:03.972217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.479 [2024-12-05 09:55:03.986296] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:16.479 [2024-12-05 09:55:03.986344] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:16.479 [2024-12-05 09:55:03.986357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.479 [2024-12-05 09:55:03.986366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:16.479 [2024-12-05 09:55:03.986376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.033 ms 00:23:16.479 [2024-12-05 09:55:03.986383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.479 [2024-12-05 09:55:04.011997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.479 [2024-12-05 09:55:04.012048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:16.479 [2024-12-05 09:55:04.012061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.563 ms 00:23:16.479 [2024-12-05 09:55:04.012070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.479 [2024-12-05 09:55:04.024811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.479 [2024-12-05 09:55:04.024859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:16.479 [2024-12-05 09:55:04.024871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.679 ms 00:23:16.479 [2024-12-05 09:55:04.024879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.479 [2024-12-05 09:55:04.037583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.479 [2024-12-05 09:55:04.037629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:16.479 [2024-12-05 09:55:04.037641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.660 ms 00:23:16.479 [2024-12-05 09:55:04.037648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.479 [2024-12-05 09:55:04.038293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.479 [2024-12-05 09:55:04.038324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:16.479 [2024-12-05 09:55:04.038338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:23:16.479 [2024-12-05 09:55:04.038346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.479 [2024-12-05 09:55:04.103798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.479 [2024-12-05 09:55:04.103858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:16.479 [2024-12-05 09:55:04.103881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.431 ms 00:23:16.479 [2024-12-05 09:55:04.103891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.746 [2024-12-05 09:55:04.115074] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:16.746 [2024-12-05 09:55:04.118229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.746 [2024-12-05 09:55:04.118273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:16.746 [2024-12-05 09:55:04.118285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.260 ms 00:23:16.746 [2024-12-05 09:55:04.118294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.746 [2024-12-05 09:55:04.118381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.746 [2024-12-05 09:55:04.118393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:16.746 [2024-12-05 09:55:04.118406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:16.746 [2024-12-05 09:55:04.118415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.746 [2024-12-05 09:55:04.118487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.746 [2024-12-05 09:55:04.118498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:16.746 [2024-12-05 09:55:04.118524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:16.746 [2024-12-05 09:55:04.118533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.746 [2024-12-05 09:55:04.118554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.746 [2024-12-05 09:55:04.118563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:16.746 [2024-12-05 09:55:04.118572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:16.746 [2024-12-05 09:55:04.118580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.746 [2024-12-05 09:55:04.118620] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:16.746 [2024-12-05 09:55:04.118632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.746 [2024-12-05 09:55:04.118640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:16.746 [2024-12-05 09:55:04.118648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:16.746 [2024-12-05 09:55:04.118656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.746 [2024-12-05 09:55:04.144552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.746 [2024-12-05 09:55:04.144603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:16.746 [2024-12-05 09:55:04.144622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.877 ms 00:23:16.746 [2024-12-05 09:55:04.144631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.746 [2024-12-05 09:55:04.144714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.746 [2024-12-05 09:55:04.144725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:16.746 [2024-12-05 09:55:04.144734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:16.746 [2024-12-05 09:55:04.144743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
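The startup trace is internally consistent with the layout dump above it. The 80.00 MiB l2p region is exactly what "L2P entries: 20971520" implies at the stated 4-byte address size, and dividing that region by its size in the SB metadata table (type:0x2, blk_sz:0x5000) recovers a 4 KiB FTL block, an inference from the dumped numbers rather than anything the log states directly:

  # L2P mapping table: 20971520 entries * 4 B = 80 MiB, matching the
  # "Region l2p ... blocks: 80.00 MiB" row of the NV cache layout dump.
  echo $(( 20971520 * 4 / 1024 / 1024 ))    # -> 80
  # The same region in the SB metadata dump is blk_sz:0x5000 = 20480 blocks;
  # 80 MiB / 20480 = 4096 B, i.e. a 4 KiB FTL block size (inferred).
  echo $(( 80 * 1024 * 1024 / 0x5000 ))     # -> 4096

The blk_offs column is likewise contiguous: each region starts where the previous one ends (0x20 + 0x5000 = 0x5020, 0x5020 + 0x80 = 0x50a0, and so on), with the type:0xfffffffe rows apparently covering the unallocated remainder.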
00:23:16.746 [2024-12-05 09:55:04.146030] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.193 ms, result 0 00:23:17.774  [2024-12-05T09:55:06.346Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-05T09:55:07.291Z] Copying: 38/1024 [MB] (19 MBps) [2024-12-05T09:55:08.237Z] Copying: 56/1024 [MB] (18 MBps) [2024-12-05T09:55:09.181Z] Copying: 67/1024 [MB] (10 MBps) [2024-12-05T09:55:10.566Z] Copying: 77/1024 [MB] (10 MBps) [2024-12-05T09:55:11.511Z] Copying: 99/1024 [MB] (21 MBps) [2024-12-05T09:55:12.453Z] Copying: 112120/1048576 [kB] (10120 kBps) [2024-12-05T09:55:13.398Z] Copying: 122/1024 [MB] (13 MBps) [2024-12-05T09:55:14.342Z] Copying: 137/1024 [MB] (15 MBps) [2024-12-05T09:55:15.285Z] Copying: 152/1024 [MB] (14 MBps) [2024-12-05T09:55:16.226Z] Copying: 170/1024 [MB] (18 MBps) [2024-12-05T09:55:17.170Z] Copying: 183/1024 [MB] (12 MBps) [2024-12-05T09:55:18.556Z] Copying: 199/1024 [MB] (16 MBps) [2024-12-05T09:55:19.500Z] Copying: 217/1024 [MB] (17 MBps) [2024-12-05T09:55:20.444Z] Copying: 231/1024 [MB] (14 MBps) [2024-12-05T09:55:21.388Z] Copying: 249/1024 [MB] (17 MBps) [2024-12-05T09:55:22.330Z] Copying: 262/1024 [MB] (13 MBps) [2024-12-05T09:55:23.273Z] Copying: 272/1024 [MB] (10 MBps) [2024-12-05T09:55:24.215Z] Copying: 290/1024 [MB] (17 MBps) [2024-12-05T09:55:25.599Z] Copying: 308/1024 [MB] (18 MBps) [2024-12-05T09:55:26.168Z] Copying: 346/1024 [MB] (37 MBps) [2024-12-05T09:55:27.551Z] Copying: 382/1024 [MB] (35 MBps) [2024-12-05T09:55:28.492Z] Copying: 401/1024 [MB] (19 MBps) [2024-12-05T09:55:29.435Z] Copying: 419/1024 [MB] (17 MBps) [2024-12-05T09:55:30.375Z] Copying: 440/1024 [MB] (21 MBps) [2024-12-05T09:55:31.313Z] Copying: 458/1024 [MB] (17 MBps) [2024-12-05T09:55:32.250Z] Copying: 475/1024 [MB] (17 MBps) [2024-12-05T09:55:33.191Z] Copying: 486/1024 [MB] (11 MBps) [2024-12-05T09:55:34.575Z] Copying: 500/1024 [MB] (13 MBps) [2024-12-05T09:55:35.517Z] Copying: 511/1024 [MB] (10 MBps) [2024-12-05T09:55:36.495Z] Copying: 521/1024 [MB] (10 MBps) [2024-12-05T09:55:37.483Z] Copying: 531/1024 [MB] (10 MBps) [2024-12-05T09:55:38.427Z] Copying: 541/1024 [MB] (10 MBps) [2024-12-05T09:55:39.401Z] Copying: 551/1024 [MB] (10 MBps) [2024-12-05T09:55:40.344Z] Copying: 562/1024 [MB] (10 MBps) [2024-12-05T09:55:41.285Z] Copying: 583/1024 [MB] (21 MBps) [2024-12-05T09:55:42.224Z] Copying: 607/1024 [MB] (23 MBps) [2024-12-05T09:55:43.168Z] Copying: 631/1024 [MB] (24 MBps) [2024-12-05T09:55:44.557Z] Copying: 655/1024 [MB] (23 MBps) [2024-12-05T09:55:45.502Z] Copying: 677/1024 [MB] (22 MBps) [2024-12-05T09:55:46.446Z] Copying: 699/1024 [MB] (21 MBps) [2024-12-05T09:55:47.392Z] Copying: 720/1024 [MB] (21 MBps) [2024-12-05T09:55:48.337Z] Copying: 742/1024 [MB] (21 MBps) [2024-12-05T09:55:49.281Z] Copying: 763/1024 [MB] (21 MBps) [2024-12-05T09:55:50.225Z] Copying: 780/1024 [MB] (16 MBps) [2024-12-05T09:55:51.169Z] Copying: 799/1024 [MB] (19 MBps) [2024-12-05T09:55:52.555Z] Copying: 813/1024 [MB] (13 MBps) [2024-12-05T09:55:53.498Z] Copying: 830/1024 [MB] (16 MBps) [2024-12-05T09:55:54.441Z] Copying: 861/1024 [MB] (31 MBps) [2024-12-05T09:55:55.385Z] Copying: 892/1024 [MB] (30 MBps) [2024-12-05T09:55:56.325Z] Copying: 909/1024 [MB] (17 MBps) [2024-12-05T09:55:57.264Z] Copying: 930/1024 [MB] (21 MBps) [2024-12-05T09:55:58.206Z] Copying: 946/1024 [MB] (16 MBps) [2024-12-05T09:55:59.595Z] Copying: 962/1024 [MB] (15 MBps) [2024-12-05T09:56:00.168Z] Copying: 983/1024 [MB] (21 MBps) [2024-12-05T09:56:01.554Z] Copying: 1004/1024 [MB] (20 
MBps) [2024-12-05T09:56:01.554Z] Copying: 1020/1024 [MB] (16 MBps) [2024-12-05T09:56:01.554Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-05 09:56:01.461186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.925 [2024-12-05 09:56:01.461252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:13.925 [2024-12-05 09:56:01.461268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:13.925 [2024-12-05 09:56:01.461277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.925 [2024-12-05 09:56:01.461301] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:13.925 [2024-12-05 09:56:01.464340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.926 [2024-12-05 09:56:01.464567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:13.926 [2024-12-05 09:56:01.464591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.022 ms 00:24:13.926 [2024-12-05 09:56:01.464600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.926 [2024-12-05 09:56:01.467852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.926 [2024-12-05 09:56:01.468047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:13.926 [2024-12-05 09:56:01.468068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.215 ms 00:24:13.926 [2024-12-05 09:56:01.468077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.926 [2024-12-05 09:56:01.487886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.926 [2024-12-05 09:56:01.487939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:13.926 [2024-12-05 09:56:01.487951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.787 ms 00:24:13.926 [2024-12-05 09:56:01.487983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.926 [2024-12-05 09:56:01.494303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.926 [2024-12-05 09:56:01.494464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:13.926 [2024-12-05 09:56:01.494560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.286 ms 00:24:13.926 [2024-12-05 09:56:01.494586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.926 [2024-12-05 09:56:01.521064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.926 [2024-12-05 09:56:01.521252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:13.926 [2024-12-05 09:56:01.521274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.415 ms 00:24:13.926 [2024-12-05 09:56:01.521283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.926 [2024-12-05 09:56:01.538573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.926 [2024-12-05 09:56:01.538811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:13.926 [2024-12-05 09:56:01.538837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.763 ms 00:24:13.926 [2024-12-05 09:56:01.538847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.926 [2024-12-05 09:56:01.540661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.926 
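The copy trace above tops out at 1024/1024 MB with a reported average of 17 MBps, which squares with the wall clock: the progress stamps run from about 09:55:04 to 09:56:01, roughly 57 seconds for 1024 MB:

  # 1024 MB over the ~57 s progress window rounds to the reported average.
  echo $(( 1024 / 57 ))   # -> 17 (MBps)

With the copy finished, spdk_dd tears the device down, and the Deinit/Persist steps around this point flush the L2P, NV cache, valid map, band, and trim metadata before the superblock is persisted and the FTL is marked clean.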
[2024-12-05 09:56:01.540707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:13.926 [2024-12-05 09:56:01.540718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.768 ms 00:24:13.926 [2024-12-05 09:56:01.540726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.188 [2024-12-05 09:56:01.567008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.188 [2024-12-05 09:56:01.567055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:14.188 [2024-12-05 09:56:01.567067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.266 ms 00:24:14.188 [2024-12-05 09:56:01.567075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.188 [2024-12-05 09:56:01.593244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.188 [2024-12-05 09:56:01.593289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:14.188 [2024-12-05 09:56:01.593302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.119 ms 00:24:14.188 [2024-12-05 09:56:01.593310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.188 [2024-12-05 09:56:01.617829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.188 [2024-12-05 09:56:01.617876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:14.188 [2024-12-05 09:56:01.617889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.470 ms 00:24:14.188 [2024-12-05 09:56:01.617896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.188 [2024-12-05 09:56:01.642856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.188 [2024-12-05 09:56:01.642901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:14.188 [2024-12-05 09:56:01.642913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.883 ms 00:24:14.188 [2024-12-05 09:56:01.642920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.188 [2024-12-05 09:56:01.642967] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:14.188 [2024-12-05 09:56:01.642990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 512 / 261120 wr_cnt: 1 state: open 00:24:14.188 [2024-12-05 09:56:01.643004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 
00:24:14.188 [2024-12-05 09:56:01.643069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 
wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:14.188 [2024-12-05 09:56:01.643578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643680] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:14.189 [2024-12-05 09:56:01.643842] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:14.189 [2024-12-05 09:56:01.643852] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b77a6074-e442-45aa-b19f-66a48a40d8dc 00:24:14.189 [2024-12-05 09:56:01.643860] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 512 00:24:14.189 [2024-12-05 09:56:01.643868] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1472 00:24:14.189 [2024-12-05 09:56:01.643875] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 512 00:24:14.189 [2024-12-05 09:56:01.643884] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 2.8750 00:24:14.189 [2024-12-05 09:56:01.643898] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:14.189 [2024-12-05 09:56:01.643905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:14.189 [2024-12-05 09:56:01.643914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:14.189 [2024-12-05 09:56:01.643922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:14.189 [2024-12-05 
09:56:01.643928] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:14.189 [2024-12-05 09:56:01.643935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.189 [2024-12-05 09:56:01.643944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:14.189 [2024-12-05 09:56:01.643953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:24:14.189 [2024-12-05 09:56:01.643978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.189 [2024-12-05 09:56:01.657601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.189 [2024-12-05 09:56:01.657642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:14.189 [2024-12-05 09:56:01.657653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.603 ms 00:24:14.189 [2024-12-05 09:56:01.657661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.189 [2024-12-05 09:56:01.658059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.189 [2024-12-05 09:56:01.658079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:14.189 [2024-12-05 09:56:01.658089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:24:14.189 [2024-12-05 09:56:01.658097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.189 [2024-12-05 09:56:01.694644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.189 [2024-12-05 09:56:01.694698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:14.189 [2024-12-05 09:56:01.694711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.189 [2024-12-05 09:56:01.694722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.189 [2024-12-05 09:56:01.694787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.189 [2024-12-05 09:56:01.694801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:14.189 [2024-12-05 09:56:01.694812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.189 [2024-12-05 09:56:01.694820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.189 [2024-12-05 09:56:01.694919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.189 [2024-12-05 09:56:01.694935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:14.189 [2024-12-05 09:56:01.694946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.189 [2024-12-05 09:56:01.694956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.189 [2024-12-05 09:56:01.694973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.189 [2024-12-05 09:56:01.694982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:14.189 [2024-12-05 09:56:01.694994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.189 [2024-12-05 09:56:01.695002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.189 [2024-12-05 09:56:01.780454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.189 [2024-12-05 09:56:01.780539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:14.189 [2024-12-05 09:56:01.780554] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.189 [2024-12-05 09:56:01.780563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.451 [2024-12-05 09:56:01.850855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.451 [2024-12-05 09:56:01.850915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:14.451 [2024-12-05 09:56:01.850928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.451 [2024-12-05 09:56:01.850937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.451 [2024-12-05 09:56:01.850993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.451 [2024-12-05 09:56:01.851004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:14.451 [2024-12-05 09:56:01.851014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.451 [2024-12-05 09:56:01.851023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.451 [2024-12-05 09:56:01.851088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.451 [2024-12-05 09:56:01.851101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:14.451 [2024-12-05 09:56:01.851110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.451 [2024-12-05 09:56:01.851122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.451 [2024-12-05 09:56:01.851219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.451 [2024-12-05 09:56:01.851230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:14.451 [2024-12-05 09:56:01.851239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.451 [2024-12-05 09:56:01.851248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.451 [2024-12-05 09:56:01.851284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.451 [2024-12-05 09:56:01.851297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:14.451 [2024-12-05 09:56:01.851306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.451 [2024-12-05 09:56:01.851314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.451 [2024-12-05 09:56:01.851367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.451 [2024-12-05 09:56:01.851376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:14.451 [2024-12-05 09:56:01.851387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.451 [2024-12-05 09:56:01.851395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.451 [2024-12-05 09:56:01.851448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.451 [2024-12-05 09:56:01.851460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:14.451 [2024-12-05 09:56:01.851469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.451 [2024-12-05 09:56:01.851480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.451 [2024-12-05 09:56:01.851642] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 390.418 ms, result 0 00:24:15.390 00:24:15.390 00:24:15.390 
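Two quick checks on the shutdown dump above. First, the per-band validity table is almost entirely idle: of 100 bands only Band 1 holds data (512 / 261120 blocks, wr_cnt 1, state open). On a one-record-per-line capture the non-free bands can be filtered out directly (console.log is again a placeholder name):

  # Show only the bands that hold data or are mid-write.
  grep 'ftl_dev_dump_bands' console.log | grep -v 'state: free'

Second, the reported write amplification is just the ratio of the two counters printed beside it:

  # WAF = total writes / user writes = 1472 / 512
  echo 'scale=4; 1472 / 512' | bc   # -> 2.8750

restore.sh@80 then starts the read-back phase: the same spdk_dd binary with --ib/--of in place of --if/--ob, using --skip=131072 (the offset on the input bdev, mirroring the earlier --seek on the output side) and --count=262144 to bound the copy.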
09:56:02 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:15.390 [2024-12-05 09:56:02.813625] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:24:15.390 [2024-12-05 09:56:02.814031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79409 ] 00:24:15.390 [2024-12-05 09:56:02.979994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.648 [2024-12-05 09:56:03.096655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.907 [2024-12-05 09:56:03.394247] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:15.907 [2024-12-05 09:56:03.394331] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:16.168 [2024-12-05 09:56:03.555402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.168 [2024-12-05 09:56:03.555471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:16.168 [2024-12-05 09:56:03.555487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:16.168 [2024-12-05 09:56:03.555497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.168 [2024-12-05 09:56:03.555575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.168 [2024-12-05 09:56:03.555589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:16.168 [2024-12-05 09:56:03.555600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:16.168 [2024-12-05 09:56:03.555609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.168 [2024-12-05 09:56:03.555631] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:16.168 [2024-12-05 09:56:03.556350] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:16.168 [2024-12-05 09:56:03.556379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.168 [2024-12-05 09:56:03.556388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:16.168 [2024-12-05 09:56:03.556397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:24:16.168 [2024-12-05 09:56:03.556405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.168 [2024-12-05 09:56:03.558165] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:16.168 [2024-12-05 09:56:03.572478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.168 [2024-12-05 09:56:03.572707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:16.168 [2024-12-05 09:56:03.572731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.314 ms 00:24:16.168 [2024-12-05 09:56:03.572740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.168 [2024-12-05 09:56:03.572820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.168 [2024-12-05 09:56:03.572832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:24:16.168 [2024-12-05 09:56:03.572841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:16.168 [2024-12-05 09:56:03.572850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.168 [2024-12-05 09:56:03.581136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.168 [2024-12-05 09:56:03.581181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:16.168 [2024-12-05 09:56:03.581193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.206 ms 00:24:16.168 [2024-12-05 09:56:03.581208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.168 [2024-12-05 09:56:03.581291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.168 [2024-12-05 09:56:03.581301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:16.168 [2024-12-05 09:56:03.581310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:16.168 [2024-12-05 09:56:03.581318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.168 [2024-12-05 09:56:03.581363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.168 [2024-12-05 09:56:03.581373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:16.168 [2024-12-05 09:56:03.581382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:16.168 [2024-12-05 09:56:03.581389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.168 [2024-12-05 09:56:03.581416] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:16.168 [2024-12-05 09:56:03.585615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.168 [2024-12-05 09:56:03.585669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:16.168 [2024-12-05 09:56:03.585683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.205 ms 00:24:16.168 [2024-12-05 09:56:03.585692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.168 [2024-12-05 09:56:03.585731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.168 [2024-12-05 09:56:03.585741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:16.169 [2024-12-05 09:56:03.585750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:16.169 [2024-12-05 09:56:03.585757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.169 [2024-12-05 09:56:03.585811] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:16.169 [2024-12-05 09:56:03.585836] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:16.169 [2024-12-05 09:56:03.585873] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:16.169 [2024-12-05 09:56:03.585892] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:16.169 [2024-12-05 09:56:03.586003] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:16.169 [2024-12-05 09:56:03.586014] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 
0x48 bytes 00:24:16.169 [2024-12-05 09:56:03.586025] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:16.169 [2024-12-05 09:56:03.586036] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:16.169 [2024-12-05 09:56:03.586045] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:16.169 [2024-12-05 09:56:03.586054] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:16.169 [2024-12-05 09:56:03.586062] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:16.169 [2024-12-05 09:56:03.586074] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:16.169 [2024-12-05 09:56:03.586081] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:16.169 [2024-12-05 09:56:03.586089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.169 [2024-12-05 09:56:03.586097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:16.169 [2024-12-05 09:56:03.586106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:24:16.169 [2024-12-05 09:56:03.586114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.169 [2024-12-05 09:56:03.586197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.169 [2024-12-05 09:56:03.586206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:16.169 [2024-12-05 09:56:03.586215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:16.169 [2024-12-05 09:56:03.586223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.169 [2024-12-05 09:56:03.586341] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:16.169 [2024-12-05 09:56:03.586352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:16.169 [2024-12-05 09:56:03.586360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:16.169 [2024-12-05 09:56:03.586367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.169 [2024-12-05 09:56:03.586375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:16.169 [2024-12-05 09:56:03.586382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:16.169 [2024-12-05 09:56:03.586389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:16.169 [2024-12-05 09:56:03.586396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:16.169 [2024-12-05 09:56:03.586404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:16.169 [2024-12-05 09:56:03.586411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:16.169 [2024-12-05 09:56:03.586418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:16.169 [2024-12-05 09:56:03.586424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:16.169 [2024-12-05 09:56:03.586431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:16.169 [2024-12-05 09:56:03.586445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:16.169 [2024-12-05 09:56:03.586453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:16.169 [2024-12-05 09:56:03.586461] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.169 [2024-12-05 09:56:03.586469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:16.169 [2024-12-05 09:56:03.586476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:16.169 [2024-12-05 09:56:03.586483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.169 [2024-12-05 09:56:03.586490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:16.169 [2024-12-05 09:56:03.586498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:16.169 [2024-12-05 09:56:03.586505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.169 [2024-12-05 09:56:03.586540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:16.169 [2024-12-05 09:56:03.586548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:16.169 [2024-12-05 09:56:03.586555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.169 [2024-12-05 09:56:03.586562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:16.169 [2024-12-05 09:56:03.586569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:16.169 [2024-12-05 09:56:03.586576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.169 [2024-12-05 09:56:03.586583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:16.169 [2024-12-05 09:56:03.586591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:16.169 [2024-12-05 09:56:03.586597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.169 [2024-12-05 09:56:03.586604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:16.169 [2024-12-05 09:56:03.586611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:16.169 [2024-12-05 09:56:03.586618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:16.169 [2024-12-05 09:56:03.586626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:16.169 [2024-12-05 09:56:03.586633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:16.169 [2024-12-05 09:56:03.586640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:16.169 [2024-12-05 09:56:03.586647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:16.169 [2024-12-05 09:56:03.586654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:16.169 [2024-12-05 09:56:03.586660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.169 [2024-12-05 09:56:03.586668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:16.169 [2024-12-05 09:56:03.586675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:16.169 [2024-12-05 09:56:03.586682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.169 [2024-12-05 09:56:03.586689] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:16.169 [2024-12-05 09:56:03.586697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:16.169 [2024-12-05 09:56:03.586704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:16.169 [2024-12-05 09:56:03.586712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.169 
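The layout dump above is internally consistent: with 20971520 L2P entries at an address size of 4 bytes each, the l2p region needs exactly the 80.00 MiB it reports. A one-line check using the figures from the log:

```bash
# Cross-check from the layout dump: L2P entries x address size should
# equal the l2p region size (values taken from the trace above).
echo $(( 20971520 * 4 / 1024 / 1024 ))   # prints 80, i.e. 80 MiB
```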
[2024-12-05 09:56:03.586722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:16.169 [2024-12-05 09:56:03.586730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:16.169 [2024-12-05 09:56:03.586737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:16.169 [2024-12-05 09:56:03.586744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:16.169 [2024-12-05 09:56:03.586751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:16.169 [2024-12-05 09:56:03.586758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:16.169 [2024-12-05 09:56:03.586766] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:16.169 [2024-12-05 09:56:03.586776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:16.169 [2024-12-05 09:56:03.586788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:16.169 [2024-12-05 09:56:03.586795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:16.169 [2024-12-05 09:56:03.586802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:16.169 [2024-12-05 09:56:03.586809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:16.169 [2024-12-05 09:56:03.586817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:16.169 [2024-12-05 09:56:03.586824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:16.169 [2024-12-05 09:56:03.586831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:16.169 [2024-12-05 09:56:03.586838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:16.169 [2024-12-05 09:56:03.586846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:16.169 [2024-12-05 09:56:03.586852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:16.169 [2024-12-05 09:56:03.586860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:16.169 [2024-12-05 09:56:03.586870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:16.169 [2024-12-05 09:56:03.586877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:16.169 [2024-12-05 09:56:03.586884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:16.169 [2024-12-05 09:56:03.586906] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB 
metadata layout - base dev: 00:24:16.169 [2024-12-05 09:56:03.586914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:16.169 [2024-12-05 09:56:03.586921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:16.169 [2024-12-05 09:56:03.586929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:16.169 [2024-12-05 09:56:03.586936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:16.170 [2024-12-05 09:56:03.586944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:16.170 [2024-12-05 09:56:03.586952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.170 [2024-12-05 09:56:03.586959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:16.170 [2024-12-05 09:56:03.586966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:24:16.170 [2024-12-05 09:56:03.586974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.170 [2024-12-05 09:56:03.619144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.170 [2024-12-05 09:56:03.619201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:16.170 [2024-12-05 09:56:03.619214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.113 ms 00:24:16.170 [2024-12-05 09:56:03.619227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.170 [2024-12-05 09:56:03.619320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.170 [2024-12-05 09:56:03.619329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:16.170 [2024-12-05 09:56:03.619339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:16.170 [2024-12-05 09:56:03.619347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.170 [2024-12-05 09:56:03.665065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.170 [2024-12-05 09:56:03.665122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:16.170 [2024-12-05 09:56:03.665136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.654 ms 00:24:16.170 [2024-12-05 09:56:03.665145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.170 [2024-12-05 09:56:03.665195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.170 [2024-12-05 09:56:03.665206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:16.170 [2024-12-05 09:56:03.665220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:16.170 [2024-12-05 09:56:03.665228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.170 [2024-12-05 09:56:03.665866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.170 [2024-12-05 09:56:03.665904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:16.170 [2024-12-05 09:56:03.665916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:24:16.170 [2024-12-05 09:56:03.665925] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.170 [2024-12-05 09:56:03.666083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.170 [2024-12-05 09:56:03.666094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:16.170 [2024-12-05 09:56:03.666110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:24:16.170 [2024-12-05 09:56:03.666118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.170 [2024-12-05 09:56:03.681997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.170 [2024-12-05 09:56:03.682046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:16.170 [2024-12-05 09:56:03.682057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.856 ms 00:24:16.170 [2024-12-05 09:56:03.682066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.170 [2024-12-05 09:56:03.696444] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:24:16.170 [2024-12-05 09:56:03.696495] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:16.170 [2024-12-05 09:56:03.696523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.170 [2024-12-05 09:56:03.696533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:16.170 [2024-12-05 09:56:03.696543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.331 ms 00:24:16.170 [2024-12-05 09:56:03.696550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.170 [2024-12-05 09:56:03.722696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.170 [2024-12-05 09:56:03.722748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:16.170 [2024-12-05 09:56:03.722761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.090 ms 00:24:16.170 [2024-12-05 09:56:03.722769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.170 [2024-12-05 09:56:03.735704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.170 [2024-12-05 09:56:03.735750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:16.170 [2024-12-05 09:56:03.735763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.877 ms 00:24:16.170 [2024-12-05 09:56:03.735771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.170 [2024-12-05 09:56:03.748586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.170 [2024-12-05 09:56:03.748633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:16.170 [2024-12-05 09:56:03.748645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.766 ms 00:24:16.170 [2024-12-05 09:56:03.748653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.170 [2024-12-05 09:56:03.749299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.170 [2024-12-05 09:56:03.749324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:16.170 [2024-12-05 09:56:03.749338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:24:16.170 [2024-12-05 09:56:03.749346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:16.429 [2024-12-05 09:56:03.816692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.429 [2024-12-05 09:56:03.816757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:16.429 [2024-12-05 09:56:03.816781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.326 ms 00:24:16.429 [2024-12-05 09:56:03.816790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.429 [2024-12-05 09:56:03.828248] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:16.429 [2024-12-05 09:56:03.831329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.429 [2024-12-05 09:56:03.831375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:16.429 [2024-12-05 09:56:03.831388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.481 ms 00:24:16.429 [2024-12-05 09:56:03.831397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.429 [2024-12-05 09:56:03.831481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.429 [2024-12-05 09:56:03.831493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:16.429 [2024-12-05 09:56:03.831506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:16.429 [2024-12-05 09:56:03.831540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.429 [2024-12-05 09:56:03.832380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.429 [2024-12-05 09:56:03.832599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:16.429 [2024-12-05 09:56:03.832622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:24:16.429 [2024-12-05 09:56:03.832633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.429 [2024-12-05 09:56:03.832676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.429 [2024-12-05 09:56:03.832688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:16.429 [2024-12-05 09:56:03.832699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:16.429 [2024-12-05 09:56:03.832708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.429 [2024-12-05 09:56:03.832752] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:16.429 [2024-12-05 09:56:03.832764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.429 [2024-12-05 09:56:03.832773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:16.429 [2024-12-05 09:56:03.832782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:16.429 [2024-12-05 09:56:03.832790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.429 [2024-12-05 09:56:03.858758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.429 [2024-12-05 09:56:03.858808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:16.429 [2024-12-05 09:56:03.858828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.949 ms 00:24:16.429 [2024-12-05 09:56:03.858837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.429 [2024-12-05 09:56:03.858924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.429 
[2024-12-05 09:56:03.858936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:16.429 [2024-12-05 09:56:03.858946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:16.429 [2024-12-05 09:56:03.858954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.429 [2024-12-05 09:56:03.860434] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 304.521 ms, result 0 00:24:17.808  [2024-12-05T09:56:06.379Z] Copying: 984/1048576 [kB] (984 kBps) [2024-12-05T09:56:07.320Z] Copying: 20/1024 [MB] (19 MBps) [2024-12-05T09:56:08.349Z] Copying: 42/1024 [MB] (21 MBps) [2024-12-05T09:56:09.294Z] Copying: 63/1024 [MB] (21 MBps) [2024-12-05T09:56:10.237Z] Copying: 84/1024 [MB] (20 MBps) [2024-12-05T09:56:11.191Z] Copying: 105/1024 [MB] (21 MBps) [2024-12-05T09:56:12.131Z] Copying: 120/1024 [MB] (14 MBps) [2024-12-05T09:56:13.076Z] Copying: 144/1024 [MB] (23 MBps) [2024-12-05T09:56:14.464Z] Copying: 162/1024 [MB] (18 MBps) [2024-12-05T09:56:15.406Z] Copying: 177/1024 [MB] (14 MBps) [2024-12-05T09:56:16.348Z] Copying: 193/1024 [MB] (16 MBps) [2024-12-05T09:56:17.291Z] Copying: 204/1024 [MB] (11 MBps) [2024-12-05T09:56:18.262Z] Copying: 214/1024 [MB] (10 MBps) [2024-12-05T09:56:19.207Z] Copying: 225/1024 [MB] (10 MBps) [2024-12-05T09:56:20.153Z] Copying: 236/1024 [MB] (10 MBps) [2024-12-05T09:56:21.097Z] Copying: 250/1024 [MB] (14 MBps) [2024-12-05T09:56:22.485Z] Copying: 261/1024 [MB] (10 MBps) [2024-12-05T09:56:23.429Z] Copying: 272/1024 [MB] (10 MBps) [2024-12-05T09:56:24.372Z] Copying: 282/1024 [MB] (10 MBps) [2024-12-05T09:56:25.314Z] Copying: 307/1024 [MB] (24 MBps) [2024-12-05T09:56:26.257Z] Copying: 338/1024 [MB] (30 MBps) [2024-12-05T09:56:27.200Z] Copying: 355/1024 [MB] (17 MBps) [2024-12-05T09:56:28.143Z] Copying: 372/1024 [MB] (17 MBps) [2024-12-05T09:56:29.085Z] Copying: 386/1024 [MB] (13 MBps) [2024-12-05T09:56:30.468Z] Copying: 405/1024 [MB] (18 MBps) [2024-12-05T09:56:31.412Z] Copying: 420/1024 [MB] (14 MBps) [2024-12-05T09:56:32.353Z] Copying: 433/1024 [MB] (13 MBps) [2024-12-05T09:56:33.296Z] Copying: 444/1024 [MB] (10 MBps) [2024-12-05T09:56:34.237Z] Copying: 460/1024 [MB] (16 MBps) [2024-12-05T09:56:35.178Z] Copying: 476/1024 [MB] (15 MBps) [2024-12-05T09:56:36.115Z] Copying: 487/1024 [MB] (11 MBps) [2024-12-05T09:56:37.495Z] Copying: 503/1024 [MB] (16 MBps) [2024-12-05T09:56:38.064Z] Copying: 516/1024 [MB] (12 MBps) [2024-12-05T09:56:39.468Z] Copying: 526/1024 [MB] (10 MBps) [2024-12-05T09:56:40.086Z] Copying: 540/1024 [MB] (13 MBps) [2024-12-05T09:56:41.470Z] Copying: 558/1024 [MB] (17 MBps) [2024-12-05T09:56:42.413Z] Copying: 570/1024 [MB] (11 MBps) [2024-12-05T09:56:43.354Z] Copying: 585/1024 [MB] (15 MBps) [2024-12-05T09:56:44.297Z] Copying: 598/1024 [MB] (13 MBps) [2024-12-05T09:56:45.237Z] Copying: 613/1024 [MB] (14 MBps) [2024-12-05T09:56:46.177Z] Copying: 626/1024 [MB] (13 MBps) [2024-12-05T09:56:47.119Z] Copying: 640/1024 [MB] (13 MBps) [2024-12-05T09:56:48.505Z] Copying: 660/1024 [MB] (20 MBps) [2024-12-05T09:56:49.079Z] Copying: 679/1024 [MB] (18 MBps) [2024-12-05T09:56:50.467Z] Copying: 689/1024 [MB] (10 MBps) [2024-12-05T09:56:51.412Z] Copying: 704/1024 [MB] (14 MBps) [2024-12-05T09:56:52.357Z] Copying: 723/1024 [MB] (18 MBps) [2024-12-05T09:56:53.301Z] Copying: 741/1024 [MB] (17 MBps) [2024-12-05T09:56:54.245Z] Copying: 754/1024 [MB] (13 MBps) [2024-12-05T09:56:55.187Z] Copying: 772/1024 [MB] (17 MBps) 
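Both spdk_dd passes are driven by the --json=.../ftl.json file visible on the command line, which re-creates ftl0 on top of the base and write-buffer bdevs inside the standalone spdk_dd app; the startup trace above (Open base bdev, Open cache bdev, Load super block, and so on) is that config being applied. The following is illustrative only, an approximate shape for such a file: the base bdev name and UUID are placeholders rather than values from this run, and a real file also needs the entries that create the underlying NVMe bdevs (the log names nvc0n1p0 as the write buffer cache).

```bash
# Illustrative sketch of an SPDK ftl.json for spdk_dd --json; names and
# UUID below are placeholders, not values captured from this run.
cat > ftl.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_ftl_create",
          "params": {
            "name": "ftl0",
            "base_bdev": "basen1",
            "cache": "nvc0n1p0",
            "uuid": "00000000-0000-0000-0000-000000000000"
          }
        }
      ]
    }
  ]
}
EOF
```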
[2024-12-05T09:56:56.131Z] Copying: 783/1024 [MB] (11 MBps) [2024-12-05T09:56:57.074Z] Copying: 794/1024 [MB] (10 MBps) [2024-12-05T09:56:58.463Z] Copying: 810/1024 [MB] (16 MBps) [2024-12-05T09:56:59.408Z] Copying: 827/1024 [MB] (16 MBps) [2024-12-05T09:57:00.353Z] Copying: 843/1024 [MB] (16 MBps) [2024-12-05T09:57:01.297Z] Copying: 861/1024 [MB] (18 MBps) [2024-12-05T09:57:02.242Z] Copying: 872/1024 [MB] (10 MBps) [2024-12-05T09:57:03.187Z] Copying: 887/1024 [MB] (15 MBps) [2024-12-05T09:57:04.130Z] Copying: 907/1024 [MB] (20 MBps) [2024-12-05T09:57:05.075Z] Copying: 928/1024 [MB] (20 MBps) [2024-12-05T09:57:06.457Z] Copying: 943/1024 [MB] (15 MBps) [2024-12-05T09:57:07.402Z] Copying: 954/1024 [MB] (10 MBps) [2024-12-05T09:57:08.345Z] Copying: 964/1024 [MB] (10 MBps) [2024-12-05T09:57:09.288Z] Copying: 975/1024 [MB] (10 MBps) [2024-12-05T09:57:10.231Z] Copying: 986/1024 [MB] (10 MBps) [2024-12-05T09:57:11.185Z] Copying: 996/1024 [MB] (10 MBps) [2024-12-05T09:57:12.160Z] Copying: 1008/1024 [MB] (11 MBps) [2024-12-05T09:57:12.730Z] Copying: 1019/1024 [MB] (11 MBps) [2024-12-05T09:57:12.731Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-05 09:57:12.472277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.102 [2024-12-05 09:57:12.472343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:25.102 [2024-12-05 09:57:12.472367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:25.102 [2024-12-05 09:57:12.472376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.102 [2024-12-05 09:57:12.472398] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:25.102 [2024-12-05 09:57:12.475467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.102 [2024-12-05 09:57:12.475671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:25.102 [2024-12-05 09:57:12.475693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.052 ms 00:25:25.102 [2024-12-05 09:57:12.475701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.102 [2024-12-05 09:57:12.475923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.102 [2024-12-05 09:57:12.475935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:25.102 [2024-12-05 09:57:12.475945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:25:25.102 [2024-12-05 09:57:12.475961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.102 [2024-12-05 09:57:12.488102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.102 [2024-12-05 09:57:12.488149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:25.102 [2024-12-05 09:57:12.488162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.124 ms 00:25:25.102 [2024-12-05 09:57:12.488170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.102 [2024-12-05 09:57:12.494431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.102 [2024-12-05 09:57:12.494470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:25.102 [2024-12-05 09:57:12.494481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.231 ms 00:25:25.102 [2024-12-05 09:57:12.494497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.102 [2024-12-05 
09:57:12.520940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.102 [2024-12-05 09:57:12.520990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:25.102 [2024-12-05 09:57:12.521003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.372 ms 00:25:25.102 [2024-12-05 09:57:12.521011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.102 [2024-12-05 09:57:12.536756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.102 [2024-12-05 09:57:12.536948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:25.102 [2024-12-05 09:57:12.536971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.698 ms 00:25:25.102 [2024-12-05 09:57:12.536981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.362 [2024-12-05 09:57:12.905761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.362 [2024-12-05 09:57:12.905816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:25.363 [2024-12-05 09:57:12.905831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 368.701 ms 00:25:25.363 [2024-12-05 09:57:12.905839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.363 [2024-12-05 09:57:12.932133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.363 [2024-12-05 09:57:12.932328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:25.363 [2024-12-05 09:57:12.932349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.277 ms 00:25:25.363 [2024-12-05 09:57:12.932357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.363 [2024-12-05 09:57:12.956729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.363 [2024-12-05 09:57:12.956786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:25.363 [2024-12-05 09:57:12.956800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.331 ms 00:25:25.363 [2024-12-05 09:57:12.956807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.363 [2024-12-05 09:57:12.981526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.363 [2024-12-05 09:57:12.981571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:25.363 [2024-12-05 09:57:12.981583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.656 ms 00:25:25.363 [2024-12-05 09:57:12.981591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.625 [2024-12-05 09:57:13.006785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.625 [2024-12-05 09:57:13.006950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:25.625 [2024-12-05 09:57:13.006970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.091 ms 00:25:25.625 [2024-12-05 09:57:13.006978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.625 [2024-12-05 09:57:13.007015] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:25.625 [2024-12-05 09:57:13.007032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open 00:25:25.625 [2024-12-05 09:57:13.007043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 
state: free 00:25:25.625 [2024-12-05 09:57:13.007052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 
261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:25.625 [2024-12-05 09:57:13.007450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007659] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:25.626 [2024-12-05 09:57:13.007854] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:25.626 [2024-12-05 09:57:13.007869] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b77a6074-e442-45aa-b19f-66a48a40d8dc 00:25:25.626 [2024-12-05 09:57:13.007878] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131840 00:25:25.626 [2024-12-05 09:57:13.007886] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 132288 00:25:25.626 [2024-12-05 09:57:13.007893] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 131328 00:25:25.626 [2024-12-05 09:57:13.007902] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0073 00:25:25.626 [2024-12-05 09:57:13.007915] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:25.626 [2024-12-05 09:57:13.007930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:25.626 [2024-12-05 09:57:13.007938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:25.626 [2024-12-05 09:57:13.007945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:25.626 [2024-12-05 09:57:13.007951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:25.626 [2024-12-05 09:57:13.007958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.626 [2024-12-05 09:57:13.007966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:25.626 [2024-12-05 09:57:13.007975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:25:25.626 [2024-12-05 09:57:13.008005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.626 [2024-12-05 09:57:13.021529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.626 [2024-12-05 09:57:13.021568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:25.626 [2024-12-05 09:57:13.021587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.490 ms 00:25:25.626 [2024-12-05 09:57:13.021595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.626 [2024-12-05 09:57:13.021982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.626 [2024-12-05 09:57:13.021997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:25.626 [2024-12-05 09:57:13.022007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:25:25.626 [2024-12-05 09:57:13.022015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.626 [2024-12-05 09:57:13.058458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.626 [2024-12-05 09:57:13.058542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:25.626 [2024-12-05 09:57:13.058556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.626 [2024-12-05 09:57:13.058565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.626 [2024-12-05 09:57:13.058637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.626 [2024-12-05 09:57:13.058646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:25.626 [2024-12-05 09:57:13.058656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.626 [2024-12-05 09:57:13.058666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.626 [2024-12-05 09:57:13.058731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.626 [2024-12-05 09:57:13.058743] mngt/ftl_mngt.c: 
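The statistics dump gives enough to sanity-check the reported write-amplification factor: WAF is total media writes divided by user writes, and 132288 / 131328 does round to the logged 1.0073.

```bash
# WAF = total writes / user writes, per the stats dump above.
echo 'scale=4; 132288 / 131328' | bc   # -> 1.0073
```

The 131840 total valid LBAs likewise match the single open band's 131840 / 261120 in the bands dump; every other band is free with a zero write count.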
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:25.626 [2024-12-05 09:57:13.058758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.626 [2024-12-05 09:57:13.058766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.626 [2024-12-05 09:57:13.058783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.626 [2024-12-05 09:57:13.058792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:25.626 [2024-12-05 09:57:13.058801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.626 [2024-12-05 09:57:13.058810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.626 [2024-12-05 09:57:13.142789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.626 [2024-12-05 09:57:13.142848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:25.626 [2024-12-05 09:57:13.142861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.626 [2024-12-05 09:57:13.142869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.626 [2024-12-05 09:57:13.212303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.626 [2024-12-05 09:57:13.212355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:25.626 [2024-12-05 09:57:13.212367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.626 [2024-12-05 09:57:13.212376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.626 [2024-12-05 09:57:13.212457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.626 [2024-12-05 09:57:13.212467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:25.626 [2024-12-05 09:57:13.212476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.626 [2024-12-05 09:57:13.212491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.626 [2024-12-05 09:57:13.212555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.626 [2024-12-05 09:57:13.212566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:25.626 [2024-12-05 09:57:13.212575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.627 [2024-12-05 09:57:13.212584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.627 [2024-12-05 09:57:13.212696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.627 [2024-12-05 09:57:13.212708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:25.627 [2024-12-05 09:57:13.212718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.627 [2024-12-05 09:57:13.212726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.627 [2024-12-05 09:57:13.212762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.627 [2024-12-05 09:57:13.212772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:25.627 [2024-12-05 09:57:13.212780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.627 [2024-12-05 09:57:13.212789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.627 [2024-12-05 09:57:13.212830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:25:25.627 [2024-12-05 09:57:13.212840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:25.627 [2024-12-05 09:57:13.212849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.627 [2024-12-05 09:57:13.212857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.627 [2024-12-05 09:57:13.212906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.627 [2024-12-05 09:57:13.212916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:25.627 [2024-12-05 09:57:13.212925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.627 [2024-12-05 09:57:13.212934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.627 [2024-12-05 09:57:13.213071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 740.752 ms, result 0 00:25:26.570 00:25:26.570 00:25:26.570 09:57:13 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:29.120 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:29.120 09:57:16 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:29.120 09:57:16 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:29.120 09:57:16 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:29.120 09:57:16 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:29.120 09:57:16 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:29.120 09:57:16 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77197 00:25:29.120 09:57:16 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77197 ']' 00:25:29.120 09:57:16 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77197 00:25:29.120 Process with pid 77197 is not found 00:25:29.120 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77197) - No such process 00:25:29.120 09:57:16 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77197 is not found' 00:25:29.120 Remove shared memory files 00:25:29.120 09:57:16 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:29.120 09:57:16 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:29.120 09:57:16 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:29.120 09:57:16 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:29.120 09:57:16 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:29.120 09:57:16 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:29.120 09:57:16 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:29.120 ************************************ 00:25:29.120 END TEST ftl_restore 00:25:29.120 ************************************ 00:25:29.120 00:25:29.120 real 4m47.614s 00:25:29.120 user 4m34.606s 00:25:29.120 sys 0m12.848s 00:25:29.120 09:57:16 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:29.120 09:57:16 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:29.120 09:57:16 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:29.120 09:57:16 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:29.120 
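The "testfile: OK" from md5sum -c above is the actual pass criterion for ftl_restore: a digest of the data is recorded before the dirty shutdown, and after the device is restored spdk_dd reads the region back for comparison. Reduced to its core, with placeholder file names:

```bash
# Core of the restore verification (file names are placeholders):
md5sum testfile > testfile.md5   # digest recorded before the dirty shutdown
# ... shut down dirty, bring the FTL device back up, read the data back ...
md5sum -c testfile.md5           # "testfile: OK" means the restore was lossless
```

The killprocess 77197 failure during cleanup is benign: the target app already exited, so kill -0 fails, the helper logs 'Process with pid 77197 is not found', and teardown continues.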
09:57:16 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:29.120 09:57:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:29.120 ************************************ 00:25:29.120 START TEST ftl_dirty_shutdown 00:25:29.120 ************************************ 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:29.120 * Looking for test storage... 00:25:29.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:29.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.120 --rc genhtml_branch_coverage=1 00:25:29.120 --rc genhtml_function_coverage=1 00:25:29.120 --rc genhtml_legend=1 00:25:29.120 --rc geninfo_all_blocks=1 00:25:29.120 --rc geninfo_unexecuted_blocks=1 00:25:29.120 00:25:29.120 ' 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:29.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.120 --rc genhtml_branch_coverage=1 00:25:29.120 --rc genhtml_function_coverage=1 00:25:29.120 --rc genhtml_legend=1 00:25:29.120 --rc geninfo_all_blocks=1 00:25:29.120 --rc geninfo_unexecuted_blocks=1 00:25:29.120 00:25:29.120 ' 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:29.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.120 --rc genhtml_branch_coverage=1 00:25:29.120 --rc genhtml_function_coverage=1 00:25:29.120 --rc genhtml_legend=1 00:25:29.120 --rc geninfo_all_blocks=1 00:25:29.120 --rc geninfo_unexecuted_blocks=1 00:25:29.120 00:25:29.120 ' 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:29.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.120 --rc genhtml_branch_coverage=1 00:25:29.120 --rc genhtml_function_coverage=1 00:25:29.120 --rc genhtml_legend=1 00:25:29.120 --rc geninfo_all_blocks=1 00:25:29.120 --rc geninfo_unexecuted_blocks=1 00:25:29.120 00:25:29.120 ' 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:29.120 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:29.121 09:57:16 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80223 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80223 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80223 ']' 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:29.121 09:57:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:29.121 [2024-12-05 09:57:16.709972] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
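What the harness is doing at this point: dirty_shutdown.sh has parsed its -c arguments into the NV-cache (0000:00:10.0) and base (0000:00:11.0) PCI addresses, backgrounded spdk_tgt pinned to core 0 (-m 0x1), and waitforlisten now polls the target's RPC socket before any bdev RPCs are issued. A minimal bash sketch of that launch-and-wait pattern, assuming an SPDK checkout at $SPDK_DIR — it is an illustration, not the autotest helper itself; the retry budget and sleep interval are assumptions, and rpc_get_methods is simply a cheap RPC that succeeds once the app is listening on the default /var/tmp/spdk.sock:

    #!/usr/bin/env bash
    # Sketch: start an SPDK target and block until its RPC socket answers.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &        # core mask 0x1, as in the log
    svcpid=$!

    for _ in $(seq 1 100); do                      # retry budget is illustrative
        if "$SPDK_DIR/scripts/rpc.py" rpc_get_methods &>/dev/null; then
            break                                  # target is up on /var/tmp/spdk.sock
        fi
        sleep 0.5
    done

    # bdev RPCs (bdev_nvme_attach_controller, bdev_ftl_create, ...) go here.
    kill "$svcpid"                                 # clean shutdown when done

The RPCs named in the comment are exactly the ones this log issues next: attach the two NVMe controllers, carve out the lvol and NV-cache split, then bdev_ftl_create with the -c nvc0n1p0 cache argument assembled above.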
00:25:29.121 [2024-12-05 09:57:16.710324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80223 ] 00:25:29.383 [2024-12-05 09:57:16.885263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.644 [2024-12-05 09:57:17.012713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.216 09:57:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.216 09:57:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:30.216 09:57:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:30.216 09:57:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:30.216 09:57:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:30.216 09:57:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:30.216 09:57:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:30.216 09:57:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:30.479 09:57:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:30.479 09:57:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:30.479 09:57:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:30.479 09:57:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:30.479 09:57:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:30.479 09:57:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:30.479 09:57:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:30.479 09:57:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:30.741 09:57:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:30.741 { 00:25:30.741 "name": "nvme0n1", 00:25:30.741 "aliases": [ 00:25:30.741 "108d95f4-874e-4b9e-b731-ab4965cb53c8" 00:25:30.741 ], 00:25:30.741 "product_name": "NVMe disk", 00:25:30.741 "block_size": 4096, 00:25:30.741 "num_blocks": 1310720, 00:25:30.741 "uuid": "108d95f4-874e-4b9e-b731-ab4965cb53c8", 00:25:30.741 "numa_id": -1, 00:25:30.741 "assigned_rate_limits": { 00:25:30.741 "rw_ios_per_sec": 0, 00:25:30.741 "rw_mbytes_per_sec": 0, 00:25:30.741 "r_mbytes_per_sec": 0, 00:25:30.741 "w_mbytes_per_sec": 0 00:25:30.741 }, 00:25:30.741 "claimed": true, 00:25:30.741 "claim_type": "read_many_write_one", 00:25:30.741 "zoned": false, 00:25:30.741 "supported_io_types": { 00:25:30.741 "read": true, 00:25:30.741 "write": true, 00:25:30.741 "unmap": true, 00:25:30.741 "flush": true, 00:25:30.741 "reset": true, 00:25:30.741 "nvme_admin": true, 00:25:30.741 "nvme_io": true, 00:25:30.741 "nvme_io_md": false, 00:25:30.741 "write_zeroes": true, 00:25:30.741 "zcopy": false, 00:25:30.741 "get_zone_info": false, 00:25:30.741 "zone_management": false, 00:25:30.741 "zone_append": false, 00:25:30.741 "compare": true, 00:25:30.741 "compare_and_write": false, 00:25:30.741 "abort": true, 00:25:30.741 "seek_hole": false, 00:25:30.741 "seek_data": false, 00:25:30.741 
"copy": true, 00:25:30.741 "nvme_iov_md": false 00:25:30.741 }, 00:25:30.741 "driver_specific": { 00:25:30.741 "nvme": [ 00:25:30.741 { 00:25:30.741 "pci_address": "0000:00:11.0", 00:25:30.741 "trid": { 00:25:30.741 "trtype": "PCIe", 00:25:30.741 "traddr": "0000:00:11.0" 00:25:30.741 }, 00:25:30.741 "ctrlr_data": { 00:25:30.741 "cntlid": 0, 00:25:30.741 "vendor_id": "0x1b36", 00:25:30.741 "model_number": "QEMU NVMe Ctrl", 00:25:30.741 "serial_number": "12341", 00:25:30.741 "firmware_revision": "8.0.0", 00:25:30.741 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:30.741 "oacs": { 00:25:30.741 "security": 0, 00:25:30.741 "format": 1, 00:25:30.741 "firmware": 0, 00:25:30.741 "ns_manage": 1 00:25:30.741 }, 00:25:30.741 "multi_ctrlr": false, 00:25:30.741 "ana_reporting": false 00:25:30.741 }, 00:25:30.741 "vs": { 00:25:30.741 "nvme_version": "1.4" 00:25:30.741 }, 00:25:30.741 "ns_data": { 00:25:30.741 "id": 1, 00:25:30.741 "can_share": false 00:25:30.741 } 00:25:30.741 } 00:25:30.741 ], 00:25:30.741 "mp_policy": "active_passive" 00:25:30.741 } 00:25:30.741 } 00:25:30.741 ]' 00:25:30.741 09:57:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:30.741 09:57:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:30.741 09:57:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:30.741 09:57:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:30.741 09:57:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:30.741 09:57:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:30.741 09:57:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:30.741 09:57:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:30.741 09:57:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:30.741 09:57:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:30.741 09:57:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:31.003 09:57:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=8fbf820d-d1ef-4174-8b73-59d0b61e32ee 00:25:31.003 09:57:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:31.003 09:57:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8fbf820d-d1ef-4174-8b73-59d0b61e32ee 00:25:31.264 09:57:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:31.526 09:57:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=8494fc65-f854-4003-a6cd-d8c625c4126a 00:25:31.526 09:57:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8494fc65-f854-4003-a6cd-d8c625c4126a 00:25:31.787 09:57:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=02082ff5-3572-444d-93d4-ebc35aed94d8 00:25:31.787 09:57:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:31.787 09:57:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 02082ff5-3572-444d-93d4-ebc35aed94d8 00:25:31.787 09:57:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:31.787 09:57:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:31.787 09:57:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=02082ff5-3572-444d-93d4-ebc35aed94d8 00:25:31.787 09:57:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:31.787 09:57:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 02082ff5-3572-444d-93d4-ebc35aed94d8 00:25:31.787 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=02082ff5-3572-444d-93d4-ebc35aed94d8 00:25:31.787 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:31.787 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:31.787 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:31.787 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02082ff5-3572-444d-93d4-ebc35aed94d8 00:25:32.049 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:32.049 { 00:25:32.049 "name": "02082ff5-3572-444d-93d4-ebc35aed94d8", 00:25:32.049 "aliases": [ 00:25:32.049 "lvs/nvme0n1p0" 00:25:32.049 ], 00:25:32.049 "product_name": "Logical Volume", 00:25:32.049 "block_size": 4096, 00:25:32.049 "num_blocks": 26476544, 00:25:32.049 "uuid": "02082ff5-3572-444d-93d4-ebc35aed94d8", 00:25:32.049 "assigned_rate_limits": { 00:25:32.049 "rw_ios_per_sec": 0, 00:25:32.049 "rw_mbytes_per_sec": 0, 00:25:32.049 "r_mbytes_per_sec": 0, 00:25:32.049 "w_mbytes_per_sec": 0 00:25:32.049 }, 00:25:32.049 "claimed": false, 00:25:32.049 "zoned": false, 00:25:32.049 "supported_io_types": { 00:25:32.049 "read": true, 00:25:32.049 "write": true, 00:25:32.049 "unmap": true, 00:25:32.049 "flush": false, 00:25:32.049 "reset": true, 00:25:32.049 "nvme_admin": false, 00:25:32.049 "nvme_io": false, 00:25:32.049 "nvme_io_md": false, 00:25:32.049 "write_zeroes": true, 00:25:32.049 "zcopy": false, 00:25:32.049 "get_zone_info": false, 00:25:32.049 "zone_management": false, 00:25:32.049 "zone_append": false, 00:25:32.049 "compare": false, 00:25:32.049 "compare_and_write": false, 00:25:32.049 "abort": false, 00:25:32.049 "seek_hole": true, 00:25:32.049 "seek_data": true, 00:25:32.049 "copy": false, 00:25:32.049 "nvme_iov_md": false 00:25:32.049 }, 00:25:32.049 "driver_specific": { 00:25:32.049 "lvol": { 00:25:32.049 "lvol_store_uuid": "8494fc65-f854-4003-a6cd-d8c625c4126a", 00:25:32.049 "base_bdev": "nvme0n1", 00:25:32.049 "thin_provision": true, 00:25:32.049 "num_allocated_clusters": 0, 00:25:32.049 "snapshot": false, 00:25:32.049 "clone": false, 00:25:32.049 "esnap_clone": false 00:25:32.049 } 00:25:32.049 } 00:25:32.049 } 00:25:32.049 ]' 00:25:32.049 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:32.049 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:32.049 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:32.049 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:32.049 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:32.049 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:32.049 09:57:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:32.049 09:57:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:32.049 09:57:19 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:32.311 09:57:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:32.311 09:57:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:32.311 09:57:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 02082ff5-3572-444d-93d4-ebc35aed94d8 00:25:32.311 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=02082ff5-3572-444d-93d4-ebc35aed94d8 00:25:32.311 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:32.311 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:32.311 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:32.311 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02082ff5-3572-444d-93d4-ebc35aed94d8 00:25:32.572 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:32.572 { 00:25:32.572 "name": "02082ff5-3572-444d-93d4-ebc35aed94d8", 00:25:32.572 "aliases": [ 00:25:32.572 "lvs/nvme0n1p0" 00:25:32.572 ], 00:25:32.572 "product_name": "Logical Volume", 00:25:32.572 "block_size": 4096, 00:25:32.572 "num_blocks": 26476544, 00:25:32.572 "uuid": "02082ff5-3572-444d-93d4-ebc35aed94d8", 00:25:32.572 "assigned_rate_limits": { 00:25:32.572 "rw_ios_per_sec": 0, 00:25:32.572 "rw_mbytes_per_sec": 0, 00:25:32.572 "r_mbytes_per_sec": 0, 00:25:32.572 "w_mbytes_per_sec": 0 00:25:32.572 }, 00:25:32.572 "claimed": false, 00:25:32.572 "zoned": false, 00:25:32.572 "supported_io_types": { 00:25:32.572 "read": true, 00:25:32.572 "write": true, 00:25:32.572 "unmap": true, 00:25:32.572 "flush": false, 00:25:32.572 "reset": true, 00:25:32.572 "nvme_admin": false, 00:25:32.572 "nvme_io": false, 00:25:32.572 "nvme_io_md": false, 00:25:32.572 "write_zeroes": true, 00:25:32.572 "zcopy": false, 00:25:32.572 "get_zone_info": false, 00:25:32.572 "zone_management": false, 00:25:32.572 "zone_append": false, 00:25:32.572 "compare": false, 00:25:32.572 "compare_and_write": false, 00:25:32.572 "abort": false, 00:25:32.572 "seek_hole": true, 00:25:32.572 "seek_data": true, 00:25:32.572 "copy": false, 00:25:32.572 "nvme_iov_md": false 00:25:32.572 }, 00:25:32.572 "driver_specific": { 00:25:32.572 "lvol": { 00:25:32.572 "lvol_store_uuid": "8494fc65-f854-4003-a6cd-d8c625c4126a", 00:25:32.572 "base_bdev": "nvme0n1", 00:25:32.572 "thin_provision": true, 00:25:32.572 "num_allocated_clusters": 0, 00:25:32.572 "snapshot": false, 00:25:32.572 "clone": false, 00:25:32.572 "esnap_clone": false 00:25:32.572 } 00:25:32.572 } 00:25:32.572 } 00:25:32.572 ]' 00:25:32.572 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:32.572 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:32.572 09:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:32.572 09:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:32.572 09:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:32.572 09:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:32.572 09:57:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:32.572 09:57:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:32.834 09:57:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:32.834 09:57:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 02082ff5-3572-444d-93d4-ebc35aed94d8 00:25:32.834 09:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=02082ff5-3572-444d-93d4-ebc35aed94d8 00:25:32.834 09:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:32.834 09:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:32.834 09:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:32.834 09:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02082ff5-3572-444d-93d4-ebc35aed94d8 00:25:32.834 09:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:32.834 { 00:25:32.834 "name": "02082ff5-3572-444d-93d4-ebc35aed94d8", 00:25:32.834 "aliases": [ 00:25:32.834 "lvs/nvme0n1p0" 00:25:32.834 ], 00:25:32.834 "product_name": "Logical Volume", 00:25:32.834 "block_size": 4096, 00:25:32.834 "num_blocks": 26476544, 00:25:32.834 "uuid": "02082ff5-3572-444d-93d4-ebc35aed94d8", 00:25:32.834 "assigned_rate_limits": { 00:25:32.834 "rw_ios_per_sec": 0, 00:25:32.834 "rw_mbytes_per_sec": 0, 00:25:32.834 "r_mbytes_per_sec": 0, 00:25:32.834 "w_mbytes_per_sec": 0 00:25:32.834 }, 00:25:32.834 "claimed": false, 00:25:32.834 "zoned": false, 00:25:32.834 "supported_io_types": { 00:25:32.834 "read": true, 00:25:32.834 "write": true, 00:25:32.834 "unmap": true, 00:25:32.834 "flush": false, 00:25:32.834 "reset": true, 00:25:32.834 "nvme_admin": false, 00:25:32.834 "nvme_io": false, 00:25:32.834 "nvme_io_md": false, 00:25:32.834 "write_zeroes": true, 00:25:32.834 "zcopy": false, 00:25:32.834 "get_zone_info": false, 00:25:32.834 "zone_management": false, 00:25:32.834 "zone_append": false, 00:25:32.834 "compare": false, 00:25:32.834 "compare_and_write": false, 00:25:32.834 "abort": false, 00:25:32.834 "seek_hole": true, 00:25:32.834 "seek_data": true, 00:25:32.834 "copy": false, 00:25:32.834 "nvme_iov_md": false 00:25:32.834 }, 00:25:32.834 "driver_specific": { 00:25:32.834 "lvol": { 00:25:32.834 "lvol_store_uuid": "8494fc65-f854-4003-a6cd-d8c625c4126a", 00:25:32.834 "base_bdev": "nvme0n1", 00:25:32.834 "thin_provision": true, 00:25:32.834 "num_allocated_clusters": 0, 00:25:32.834 "snapshot": false, 00:25:32.834 "clone": false, 00:25:32.834 "esnap_clone": false 00:25:32.834 } 00:25:32.834 } 00:25:32.834 } 00:25:32.834 ]' 00:25:32.834 09:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:32.834 09:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:32.834 09:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:33.097 09:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:33.097 09:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:33.097 09:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:33.097 09:57:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:33.097 09:57:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 02082ff5-3572-444d-93d4-ebc35aed94d8 
--l2p_dram_limit 10' 00:25:33.097 09:57:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:33.097 09:57:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:33.097 09:57:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:33.097 09:57:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 02082ff5-3572-444d-93d4-ebc35aed94d8 --l2p_dram_limit 10 -c nvc0n1p0 00:25:33.097 [2024-12-05 09:57:20.653422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.097 [2024-12-05 09:57:20.653572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:33.097 [2024-12-05 09:57:20.653591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:33.097 [2024-12-05 09:57:20.653598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.097 [2024-12-05 09:57:20.653647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.097 [2024-12-05 09:57:20.653655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:33.097 [2024-12-05 09:57:20.653663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:33.097 [2024-12-05 09:57:20.653669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.097 [2024-12-05 09:57:20.653689] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:33.097 [2024-12-05 09:57:20.654228] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:33.097 [2024-12-05 09:57:20.654244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.097 [2024-12-05 09:57:20.654250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:33.097 [2024-12-05 09:57:20.654258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:25:33.097 [2024-12-05 09:57:20.654264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.097 [2024-12-05 09:57:20.654312] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 72490c1c-2963-47bd-a5c9-10ef62f869eb 00:25:33.097 [2024-12-05 09:57:20.655266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.097 [2024-12-05 09:57:20.655285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:33.098 [2024-12-05 09:57:20.655292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:33.098 [2024-12-05 09:57:20.655301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.098 [2024-12-05 09:57:20.660059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.098 [2024-12-05 09:57:20.660091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:33.098 [2024-12-05 09:57:20.660098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.725 ms 00:25:33.098 [2024-12-05 09:57:20.660105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.098 [2024-12-05 09:57:20.660171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.098 [2024-12-05 09:57:20.660180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:33.098 [2024-12-05 09:57:20.660187] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:33.098 [2024-12-05 09:57:20.660196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.098 [2024-12-05 09:57:20.660231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.098 [2024-12-05 09:57:20.660240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:33.098 [2024-12-05 09:57:20.660248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:33.098 [2024-12-05 09:57:20.660255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.098 [2024-12-05 09:57:20.660272] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:33.098 [2024-12-05 09:57:20.663182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.098 [2024-12-05 09:57:20.663207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:33.098 [2024-12-05 09:57:20.663218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.913 ms 00:25:33.098 [2024-12-05 09:57:20.663224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.098 [2024-12-05 09:57:20.663251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.098 [2024-12-05 09:57:20.663258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:33.098 [2024-12-05 09:57:20.663265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:33.098 [2024-12-05 09:57:20.663271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.098 [2024-12-05 09:57:20.663285] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:33.098 [2024-12-05 09:57:20.663392] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:33.098 [2024-12-05 09:57:20.663404] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:33.098 [2024-12-05 09:57:20.663412] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:33.098 [2024-12-05 09:57:20.663422] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:33.098 [2024-12-05 09:57:20.663428] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:33.098 [2024-12-05 09:57:20.663436] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:33.098 [2024-12-05 09:57:20.663442] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:33.098 [2024-12-05 09:57:20.663451] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:33.098 [2024-12-05 09:57:20.663456] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:33.098 [2024-12-05 09:57:20.663463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.098 [2024-12-05 09:57:20.663473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:33.098 [2024-12-05 09:57:20.663481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:25:33.098 [2024-12-05 09:57:20.663487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.098 [2024-12-05 09:57:20.663566] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.098 [2024-12-05 09:57:20.663573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:33.098 [2024-12-05 09:57:20.663581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:33.098 [2024-12-05 09:57:20.663586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.098 [2024-12-05 09:57:20.663677] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:33.098 [2024-12-05 09:57:20.663686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:33.098 [2024-12-05 09:57:20.663693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:33.098 [2024-12-05 09:57:20.663699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.098 [2024-12-05 09:57:20.663706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:33.098 [2024-12-05 09:57:20.663712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:33.098 [2024-12-05 09:57:20.663719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:33.098 [2024-12-05 09:57:20.663725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:33.098 [2024-12-05 09:57:20.663732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:33.098 [2024-12-05 09:57:20.663737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:33.098 [2024-12-05 09:57:20.663745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:33.098 [2024-12-05 09:57:20.663752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:33.098 [2024-12-05 09:57:20.663759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:33.098 [2024-12-05 09:57:20.663764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:33.098 [2024-12-05 09:57:20.663771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:33.098 [2024-12-05 09:57:20.663777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.098 [2024-12-05 09:57:20.663785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:33.098 [2024-12-05 09:57:20.663790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:33.098 [2024-12-05 09:57:20.663796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.098 [2024-12-05 09:57:20.663801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:33.098 [2024-12-05 09:57:20.663807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:33.098 [2024-12-05 09:57:20.663812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:33.098 [2024-12-05 09:57:20.663818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:33.098 [2024-12-05 09:57:20.663823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:33.098 [2024-12-05 09:57:20.663829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:33.098 [2024-12-05 09:57:20.663835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:33.098 [2024-12-05 09:57:20.663842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:33.098 [2024-12-05 09:57:20.663851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:33.098 [2024-12-05 09:57:20.663857] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:33.098 [2024-12-05 09:57:20.663862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:33.098 [2024-12-05 09:57:20.663868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:33.098 [2024-12-05 09:57:20.663873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:33.098 [2024-12-05 09:57:20.663881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:33.098 [2024-12-05 09:57:20.663886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:33.098 [2024-12-05 09:57:20.663893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:33.098 [2024-12-05 09:57:20.663898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:33.098 [2024-12-05 09:57:20.663905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:33.098 [2024-12-05 09:57:20.663910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:33.098 [2024-12-05 09:57:20.663916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:33.098 [2024-12-05 09:57:20.663921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.098 [2024-12-05 09:57:20.663928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:33.098 [2024-12-05 09:57:20.663933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:33.098 [2024-12-05 09:57:20.663939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.098 [2024-12-05 09:57:20.663945] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:33.098 [2024-12-05 09:57:20.663953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:33.098 [2024-12-05 09:57:20.663966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:33.098 [2024-12-05 09:57:20.663973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.098 [2024-12-05 09:57:20.663979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:33.098 [2024-12-05 09:57:20.663995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:33.098 [2024-12-05 09:57:20.664001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:33.098 [2024-12-05 09:57:20.664007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:33.098 [2024-12-05 09:57:20.664012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:33.098 [2024-12-05 09:57:20.664019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:33.098 [2024-12-05 09:57:20.664025] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:33.098 [2024-12-05 09:57:20.664035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:33.098 [2024-12-05 09:57:20.664042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:33.098 [2024-12-05 09:57:20.664049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:33.099 [2024-12-05 09:57:20.664055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:33.099 [2024-12-05 09:57:20.664062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:33.099 [2024-12-05 09:57:20.664068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:33.099 [2024-12-05 09:57:20.664075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:33.099 [2024-12-05 09:57:20.664081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:33.099 [2024-12-05 09:57:20.664088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:33.099 [2024-12-05 09:57:20.664094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:33.099 [2024-12-05 09:57:20.664102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:33.099 [2024-12-05 09:57:20.664108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:33.099 [2024-12-05 09:57:20.664115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:33.099 [2024-12-05 09:57:20.664120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:33.099 [2024-12-05 09:57:20.664127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:33.099 [2024-12-05 09:57:20.664132] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:33.099 [2024-12-05 09:57:20.664139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:33.099 [2024-12-05 09:57:20.664146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:33.099 [2024-12-05 09:57:20.664153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:33.099 [2024-12-05 09:57:20.664159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:33.099 [2024-12-05 09:57:20.664166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:33.099 [2024-12-05 09:57:20.664173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.099 [2024-12-05 09:57:20.664180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:33.099 [2024-12-05 09:57:20.664186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:25:33.099 [2024-12-05 09:57:20.664192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.099 [2024-12-05 09:57:20.664233] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:33.099 [2024-12-05 09:57:20.664244] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:37.308 [2024-12-05 09:57:24.082284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.308 [2024-12-05 09:57:24.082391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:37.308 [2024-12-05 09:57:24.082410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3418.036 ms 00:25:37.308 [2024-12-05 09:57:24.082423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.308 [2024-12-05 09:57:24.114121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.308 [2024-12-05 09:57:24.114188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:37.308 [2024-12-05 09:57:24.114203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.432 ms 00:25:37.308 [2024-12-05 09:57:24.114213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.308 [2024-12-05 09:57:24.114374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.308 [2024-12-05 09:57:24.114390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:37.308 [2024-12-05 09:57:24.114400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:37.308 [2024-12-05 09:57:24.114417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.308 [2024-12-05 09:57:24.149396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.308 [2024-12-05 09:57:24.149706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:37.308 [2024-12-05 09:57:24.149727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.942 ms 00:25:37.308 [2024-12-05 09:57:24.149739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.308 [2024-12-05 09:57:24.149777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.308 [2024-12-05 09:57:24.149794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:37.308 [2024-12-05 09:57:24.149804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:37.308 [2024-12-05 09:57:24.149823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.308 [2024-12-05 09:57:24.150370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.308 [2024-12-05 09:57:24.150401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:37.308 [2024-12-05 09:57:24.150413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:25:37.308 [2024-12-05 09:57:24.150425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.308 [2024-12-05 09:57:24.150567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.308 [2024-12-05 09:57:24.150582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:37.308 [2024-12-05 09:57:24.150596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:25:37.308 [2024-12-05 09:57:24.150611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.308 [2024-12-05 09:57:24.167749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.308 [2024-12-05 09:57:24.167797] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:37.308 [2024-12-05 09:57:24.167808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.118 ms 00:25:37.308 [2024-12-05 09:57:24.167820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.308 [2024-12-05 09:57:24.197115] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:37.308 [2024-12-05 09:57:24.201128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.308 [2024-12-05 09:57:24.201175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:37.308 [2024-12-05 09:57:24.201191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.218 ms 00:25:37.308 [2024-12-05 09:57:24.201201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.308 [2024-12-05 09:57:24.296921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.308 [2024-12-05 09:57:24.296982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:37.308 [2024-12-05 09:57:24.297000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.671 ms 00:25:37.308 [2024-12-05 09:57:24.297010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.308 [2024-12-05 09:57:24.297220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.308 [2024-12-05 09:57:24.297237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:37.308 [2024-12-05 09:57:24.297253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:25:37.308 [2024-12-05 09:57:24.297262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.308 [2024-12-05 09:57:24.323313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.308 [2024-12-05 09:57:24.323360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:37.308 [2024-12-05 09:57:24.323376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.994 ms 00:25:37.308 [2024-12-05 09:57:24.323385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.308 [2024-12-05 09:57:24.348215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.308 [2024-12-05 09:57:24.348259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:37.308 [2024-12-05 09:57:24.348274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.767 ms 00:25:37.308 [2024-12-05 09:57:24.348282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.308 [2024-12-05 09:57:24.348978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.308 [2024-12-05 09:57:24.349000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:37.308 [2024-12-05 09:57:24.349014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:25:37.309 [2024-12-05 09:57:24.349025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.309 [2024-12-05 09:57:24.430275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.309 [2024-12-05 09:57:24.430324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:37.309 [2024-12-05 09:57:24.430345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.204 ms 00:25:37.309 [2024-12-05 09:57:24.430354] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.309 [2024-12-05 09:57:24.458283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.309 [2024-12-05 09:57:24.458472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:37.309 [2024-12-05 09:57:24.458500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.829 ms 00:25:37.309 [2024-12-05 09:57:24.458526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.309 [2024-12-05 09:57:24.484966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.309 [2024-12-05 09:57:24.485162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:37.309 [2024-12-05 09:57:24.485189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.198 ms 00:25:37.309 [2024-12-05 09:57:24.485198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.309 [2024-12-05 09:57:24.512112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.309 [2024-12-05 09:57:24.512312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:37.309 [2024-12-05 09:57:24.512340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.539 ms 00:25:37.309 [2024-12-05 09:57:24.512350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.309 [2024-12-05 09:57:24.512402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.309 [2024-12-05 09:57:24.512413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:37.309 [2024-12-05 09:57:24.512428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:37.309 [2024-12-05 09:57:24.512436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.309 [2024-12-05 09:57:24.512551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.309 [2024-12-05 09:57:24.512566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:37.309 [2024-12-05 09:57:24.512578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:37.309 [2024-12-05 09:57:24.512586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.309 [2024-12-05 09:57:24.513720] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3859.774 ms, result 0 00:25:37.309 { 00:25:37.309 "name": "ftl0", 00:25:37.309 "uuid": "72490c1c-2963-47bd-a5c9-10ef62f869eb" 00:25:37.309 } 00:25:37.309 09:57:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:37.309 09:57:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:37.309 09:57:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:37.309 09:57:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:37.309 09:57:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:37.571 /dev/nbd0 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:37.571 1+0 records in 00:25:37.571 1+0 records out 00:25:37.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288143 s, 14.2 MB/s 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:37.571 09:57:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:37.571 [2024-12-05 09:57:25.094573] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:25:37.571 [2024-12-05 09:57:25.094916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80367 ] 00:25:37.832 [2024-12-05 09:57:25.259375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.832 [2024-12-05 09:57:25.402708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.217  [2024-12-05T09:57:27.781Z] Copying: 186/1024 [MB] (186 MBps) [2024-12-05T09:57:28.713Z] Copying: 423/1024 [MB] (236 MBps) [2024-12-05T09:57:30.090Z] Copying: 678/1024 [MB] (255 MBps) [2024-12-05T09:57:30.348Z] Copying: 926/1024 [MB] (247 MBps) [2024-12-05T09:57:30.916Z] Copying: 1024/1024 [MB] (average 233 MBps) 00:25:43.288 00:25:43.288 09:57:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:45.190 09:57:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:45.190 [2024-12-05 09:57:32.412193] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:25:45.190 [2024-12-05 09:57:32.412318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80444 ] 00:25:45.190 [2024-12-05 09:57:32.567210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.190 [2024-12-05 09:57:32.653999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.573  [2024-12-05T09:57:35.137Z] Copying: 33/1024 [MB] (33 MBps) [2024-12-05T09:57:36.069Z] Copying: 63/1024 [MB] (29 MBps) [2024-12-05T09:57:37.005Z] Copying: 74648/1048576 [kB] (9536 kBps) [2024-12-05T09:57:37.940Z] Copying: 83/1024 [MB] (10 MBps) [2024-12-05T09:57:38.875Z] Copying: 96/1024 [MB] (12 MBps) [2024-12-05T09:57:40.250Z] Copying: 111/1024 [MB] (14 MBps) [2024-12-05T09:57:41.185Z] Copying: 146/1024 [MB] (35 MBps) [2024-12-05T09:57:42.118Z] Copying: 165/1024 [MB] (19 MBps) [2024-12-05T09:57:43.052Z] Copying: 187/1024 [MB] (22 MBps) [2024-12-05T09:57:44.148Z] Copying: 223/1024 [MB] (35 MBps) [2024-12-05T09:57:45.083Z] Copying: 236/1024 [MB] (12 MBps) [2024-12-05T09:57:46.025Z] Copying: 248/1024 [MB] (11 MBps) [2024-12-05T09:57:46.961Z] Copying: 258/1024 [MB] (10 MBps) [2024-12-05T09:57:47.895Z] Copying: 287/1024 [MB] (29 MBps) [2024-12-05T09:57:49.269Z] Copying: 322/1024 [MB] (35 MBps) [2024-12-05T09:57:50.203Z] Copying: 343/1024 [MB] (20 MBps) [2024-12-05T09:57:51.137Z] Copying: 354/1024 [MB] (11 MBps) [2024-12-05T09:57:52.072Z] Copying: 367/1024 [MB] (13 MBps) [2024-12-05T09:57:53.009Z] Copying: 377/1024 [MB] (10 MBps) [2024-12-05T09:57:53.945Z] Copying: 389/1024 [MB] (11 MBps) [2024-12-05T09:57:54.881Z] Copying: 402/1024 [MB] (13 MBps) [2024-12-05T09:57:56.253Z] Copying: 414/1024 [MB] (11 MBps) [2024-12-05T09:57:57.187Z] Copying: 446/1024 [MB] (32 MBps) [2024-12-05T09:57:58.122Z] Copying: 481/1024 [MB] (34 MBps) [2024-12-05T09:57:59.054Z] Copying: 497/1024 [MB] (16 MBps) [2024-12-05T09:57:59.988Z] Copying: 510/1024 [MB] (13 MBps) [2024-12-05T09:58:00.924Z] Copying: 534/1024 [MB] (23 MBps) [2024-12-05T09:58:01.859Z] Copying: 555/1024 [MB] (20 MBps) [2024-12-05T09:58:03.235Z] Copying: 577/1024 [MB] (22 MBps) [2024-12-05T09:58:04.171Z] Copying: 593/1024 [MB] (16 MBps) [2024-12-05T09:58:05.106Z] Copying: 619/1024 [MB] (25 MBps) [2024-12-05T09:58:06.038Z] Copying: 636/1024 [MB] (17 MBps) [2024-12-05T09:58:06.970Z] Copying: 654/1024 [MB] (17 MBps) [2024-12-05T09:58:07.903Z] Copying: 672/1024 [MB] (17 MBps) [2024-12-05T09:58:09.275Z] Copying: 688/1024 [MB] (16 MBps) [2024-12-05T09:58:10.208Z] Copying: 704/1024 [MB] (16 MBps) [2024-12-05T09:58:11.141Z] Copying: 738/1024 [MB] (33 MBps) [2024-12-05T09:58:12.077Z] Copying: 773/1024 [MB] (34 MBps) [2024-12-05T09:58:13.012Z] Copying: 808/1024 [MB] (35 MBps) [2024-12-05T09:58:13.946Z] Copying: 842/1024 [MB] (33 MBps) [2024-12-05T09:58:14.952Z] Copying: 862/1024 [MB] (20 MBps) [2024-12-05T09:58:15.903Z] Copying: 876/1024 [MB] (14 MBps) [2024-12-05T09:58:17.278Z] Copying: 893/1024 [MB] (16 MBps) [2024-12-05T09:58:18.213Z] Copying: 924/1024 [MB] (30 MBps) [2024-12-05T09:58:19.148Z] Copying: 948/1024 [MB] (24 MBps) [2024-12-05T09:58:20.083Z] Copying: 966/1024 [MB] (17 MBps) [2024-12-05T09:58:21.018Z] Copying: 984/1024 [MB] (17 MBps) [2024-12-05T09:58:21.585Z] Copying: 1000/1024 [MB] (16 MBps) [2024-12-05T09:58:22.154Z] Copying: 1024/1024 [MB] (average 21 MBps) 00:26:34.525 00:26:34.525 09:58:22 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:34.525 09:58:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:34.784 09:58:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:35.045 [2024-12-05 09:58:22.473526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.045 [2024-12-05 09:58:22.473559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:35.045 [2024-12-05 09:58:22.473570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:35.045 [2024-12-05 09:58:22.473578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.045 [2024-12-05 09:58:22.473598] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:35.045 [2024-12-05 09:58:22.475567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.045 [2024-12-05 09:58:22.475590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:35.045 [2024-12-05 09:58:22.475600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.955 ms 00:26:35.045 [2024-12-05 09:58:22.475607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.045 [2024-12-05 09:58:22.477671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.045 [2024-12-05 09:58:22.477697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:35.045 [2024-12-05 09:58:22.477706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.042 ms 00:26:35.045 [2024-12-05 09:58:22.477713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.045 [2024-12-05 09:58:22.492615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.045 [2024-12-05 09:58:22.492734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:35.045 [2024-12-05 09:58:22.492751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.884 ms 00:26:35.045 [2024-12-05 09:58:22.492757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.045 [2024-12-05 09:58:22.497617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.045 [2024-12-05 09:58:22.497640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:35.045 [2024-12-05 09:58:22.497649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.832 ms 00:26:35.045 [2024-12-05 09:58:22.497655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.045 [2024-12-05 09:58:22.515889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.045 [2024-12-05 09:58:22.515991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:35.045 [2024-12-05 09:58:22.516007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.179 ms 00:26:35.045 [2024-12-05 09:58:22.516019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.045 [2024-12-05 09:58:22.528837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.046 [2024-12-05 09:58:22.528863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:35.046 [2024-12-05 09:58:22.528875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.777 ms 00:26:35.046 
[2024-12-05 09:58:22.528882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.046 [2024-12-05 09:58:22.529019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.046 [2024-12-05 09:58:22.529028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:35.046 [2024-12-05 09:58:22.529036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:26:35.046 [2024-12-05 09:58:22.529043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.046 [2024-12-05 09:58:22.547679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.046 [2024-12-05 09:58:22.547703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:35.046 [2024-12-05 09:58:22.547713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.621 ms 00:26:35.046 [2024-12-05 09:58:22.547719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.046 [2024-12-05 09:58:22.565892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.046 [2024-12-05 09:58:22.565916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:35.046 [2024-12-05 09:58:22.565925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.144 ms 00:26:35.046 [2024-12-05 09:58:22.565931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.046 [2024-12-05 09:58:22.582792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.046 [2024-12-05 09:58:22.582895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:35.046 [2024-12-05 09:58:22.582910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.831 ms 00:26:35.046 [2024-12-05 09:58:22.582916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.046 [2024-12-05 09:58:22.600387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.046 [2024-12-05 09:58:22.600411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:35.046 [2024-12-05 09:58:22.600419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.417 ms 00:26:35.046 [2024-12-05 09:58:22.600425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.046 [2024-12-05 09:58:22.600453] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:35.046 [2024-12-05 09:58:22.600464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600526] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600689] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 
09:58:22.600853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:35.046 [2024-12-05 09:58:22.600923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.600930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.600936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.600945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.600951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.600958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.600964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.600970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.600976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.600983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.600988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.600995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:26:35.047 [2024-12-05 09:58:22.601013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:35.047 [2024-12-05 09:58:22.601137] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:35.047 [2024-12-05 09:58:22.601145] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 72490c1c-2963-47bd-a5c9-10ef62f869eb 00:26:35.047 [2024-12-05 09:58:22.601151] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:35.047 [2024-12-05 09:58:22.601159] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:35.047 [2024-12-05 09:58:22.601167] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:35.047 [2024-12-05 09:58:22.601174] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:35.047 [2024-12-05 09:58:22.601179] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:35.047 [2024-12-05 09:58:22.601186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:35.047 [2024-12-05 09:58:22.601192] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:35.047 [2024-12-05 09:58:22.601198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:35.047 [2024-12-05 09:58:22.601203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:35.047 [2024-12-05 09:58:22.601209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.047 [2024-12-05 09:58:22.601215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:35.047 [2024-12-05 09:58:22.601223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:26:35.047 [2024-12-05 09:58:22.601229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.047 [2024-12-05 09:58:22.610752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.047 [2024-12-05 09:58:22.610775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:35.047 [2024-12-05 09:58:22.610784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.499 ms 00:26:35.047 [2024-12-05 09:58:22.610790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.047 [2024-12-05 09:58:22.611057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.047 [2024-12-05 09:58:22.611065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:35.047 [2024-12-05 09:58:22.611072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:26:35.047 [2024-12-05 09:58:22.611078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.047 [2024-12-05 09:58:22.643590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.047 [2024-12-05 09:58:22.643615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:35.047 [2024-12-05 09:58:22.643625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.047 [2024-12-05 09:58:22.643631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.047 [2024-12-05 09:58:22.643673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.047 [2024-12-05 09:58:22.643680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:35.047 [2024-12-05 09:58:22.643687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.047 [2024-12-05 09:58:22.643693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.047 [2024-12-05 09:58:22.643772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.047 [2024-12-05 09:58:22.643782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:35.047 [2024-12-05 09:58:22.643789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.047 [2024-12-05 09:58:22.643795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.047 [2024-12-05 09:58:22.643811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.047 [2024-12-05 09:58:22.643817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:35.047 [2024-12-05 09:58:22.643824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.047 [2024-12-05 09:58:22.643829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.308 [2024-12-05 09:58:22.702283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:26:35.308 [2024-12-05 09:58:22.702427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:35.308 [2024-12-05 09:58:22.702443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.308 [2024-12-05 09:58:22.702450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.308 [2024-12-05 09:58:22.749937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.308 [2024-12-05 09:58:22.749967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:35.308 [2024-12-05 09:58:22.749976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.308 [2024-12-05 09:58:22.749983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.308 [2024-12-05 09:58:22.750038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.308 [2024-12-05 09:58:22.750046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:35.308 [2024-12-05 09:58:22.750056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.308 [2024-12-05 09:58:22.750062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.309 [2024-12-05 09:58:22.750107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.309 [2024-12-05 09:58:22.750115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:35.309 [2024-12-05 09:58:22.750122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.309 [2024-12-05 09:58:22.750128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.309 [2024-12-05 09:58:22.750200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.309 [2024-12-05 09:58:22.750208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:35.309 [2024-12-05 09:58:22.750216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.309 [2024-12-05 09:58:22.750223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.309 [2024-12-05 09:58:22.750247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.309 [2024-12-05 09:58:22.750254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:35.309 [2024-12-05 09:58:22.750261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.309 [2024-12-05 09:58:22.750267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.309 [2024-12-05 09:58:22.750295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.309 [2024-12-05 09:58:22.750303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:35.309 [2024-12-05 09:58:22.750310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.309 [2024-12-05 09:58:22.750317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.309 [2024-12-05 09:58:22.750352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.309 [2024-12-05 09:58:22.750360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:35.309 [2024-12-05 09:58:22.750367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.309 [2024-12-05 09:58:22.750373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.309 [2024-12-05 
09:58:22.750470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.931 ms, result 0 00:26:35.309 true 00:26:35.309 09:58:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80223 00:26:35.309 09:58:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80223 00:26:35.309 09:58:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:35.309 [2024-12-05 09:58:22.816826] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:26:35.309 [2024-12-05 09:58:22.816910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80970 ] 00:26:35.569 [2024-12-05 09:58:22.960484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.569 [2024-12-05 09:58:23.034421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.951  [2024-12-05T09:58:25.521Z] Copying: 258/1024 [MB] (258 MBps) [2024-12-05T09:58:26.460Z] Copying: 515/1024 [MB] (257 MBps) [2024-12-05T09:58:27.400Z] Copying: 773/1024 [MB] (257 MBps) [2024-12-05T09:58:27.972Z] Copying: 1024/1024 [MB] (average 256 MBps) 00:26:40.343 00:26:40.343 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80223 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:40.343 09:58:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:40.343 [2024-12-05 09:58:27.850248] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:26:40.343 [2024-12-05 09:58:27.850339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81022 ] 00:26:40.603 [2024-12-05 09:58:27.998775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.603 [2024-12-05 09:58:28.074928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.863 [2024-12-05 09:58:28.282835] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:40.863 [2024-12-05 09:58:28.282889] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:40.863 [2024-12-05 09:58:28.346604] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:40.863 [2024-12-05 09:58:28.347115] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:40.863 [2024-12-05 09:58:28.347841] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:41.126 [2024-12-05 09:58:28.545902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.126 [2024-12-05 09:58:28.545941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:41.126 [2024-12-05 09:58:28.545954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:41.126 [2024-12-05 09:58:28.545964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.126 [2024-12-05 09:58:28.546009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.126 [2024-12-05 09:58:28.546020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:41.126 [2024-12-05 09:58:28.546028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:41.126 [2024-12-05 09:58:28.546035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.126 [2024-12-05 09:58:28.546051] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:41.126 [2024-12-05 09:58:28.546747] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:41.126 [2024-12-05 09:58:28.546764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.126 [2024-12-05 09:58:28.546772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:41.126 [2024-12-05 09:58:28.546781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:26:41.126 [2024-12-05 09:58:28.546788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.126 [2024-12-05 09:58:28.547868] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:41.126 [2024-12-05 09:58:28.560613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.126 [2024-12-05 09:58:28.560647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:41.126 [2024-12-05 09:58:28.560659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.746 ms 00:26:41.126 [2024-12-05 09:58:28.560666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.126 [2024-12-05 09:58:28.560718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.126 [2024-12-05 09:58:28.560727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:26:41.126 [2024-12-05 09:58:28.560736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:41.126 [2024-12-05 09:58:28.560743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.126 [2024-12-05 09:58:28.565761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.126 [2024-12-05 09:58:28.565919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:41.126 [2024-12-05 09:58:28.565934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.963 ms 00:26:41.126 [2024-12-05 09:58:28.565941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.126 [2024-12-05 09:58:28.566012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.126 [2024-12-05 09:58:28.566021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:41.126 [2024-12-05 09:58:28.566028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:41.126 [2024-12-05 09:58:28.566038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.126 [2024-12-05 09:58:28.566079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.126 [2024-12-05 09:58:28.566089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:41.126 [2024-12-05 09:58:28.566097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:41.126 [2024-12-05 09:58:28.566104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.126 [2024-12-05 09:58:28.566123] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:41.126 [2024-12-05 09:58:28.569378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.126 [2024-12-05 09:58:28.569491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:41.126 [2024-12-05 09:58:28.569506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.259 ms 00:26:41.126 [2024-12-05 09:58:28.569530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.126 [2024-12-05 09:58:28.569563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.126 [2024-12-05 09:58:28.569571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:41.126 [2024-12-05 09:58:28.569580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:41.126 [2024-12-05 09:58:28.569589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.126 [2024-12-05 09:58:28.569608] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:41.126 [2024-12-05 09:58:28.569627] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:41.126 [2024-12-05 09:58:28.569661] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:41.126 [2024-12-05 09:58:28.569677] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:41.126 [2024-12-05 09:58:28.569779] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:41.126 [2024-12-05 09:58:28.569789] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:41.126 
[2024-12-05 09:58:28.569802] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:41.126 [2024-12-05 09:58:28.569811] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:41.126 [2024-12-05 09:58:28.569820] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:41.126 [2024-12-05 09:58:28.569829] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:41.126 [2024-12-05 09:58:28.569836] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:41.126 [2024-12-05 09:58:28.569843] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:41.126 [2024-12-05 09:58:28.569850] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:41.126 [2024-12-05 09:58:28.569858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.127 [2024-12-05 09:58:28.569865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:41.127 [2024-12-05 09:58:28.569874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:26:41.127 [2024-12-05 09:58:28.569881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.127 [2024-12-05 09:58:28.569964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.127 [2024-12-05 09:58:28.569973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:41.127 [2024-12-05 09:58:28.569981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:41.127 [2024-12-05 09:58:28.569988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.127 [2024-12-05 09:58:28.570100] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:41.127 [2024-12-05 09:58:28.570111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:41.127 [2024-12-05 09:58:28.570119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:41.127 [2024-12-05 09:58:28.570126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.127 [2024-12-05 09:58:28.570135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:41.127 [2024-12-05 09:58:28.570143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:41.127 [2024-12-05 09:58:28.570150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:41.127 [2024-12-05 09:58:28.570156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:41.127 [2024-12-05 09:58:28.570165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:41.127 [2024-12-05 09:58:28.570177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:41.127 [2024-12-05 09:58:28.570183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:41.127 [2024-12-05 09:58:28.570190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:41.127 [2024-12-05 09:58:28.570197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:41.127 [2024-12-05 09:58:28.570203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:41.127 [2024-12-05 09:58:28.570210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:41.127 [2024-12-05 09:58:28.570218] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.127 [2024-12-05 09:58:28.570224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:41.127 [2024-12-05 09:58:28.570231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:41.127 [2024-12-05 09:58:28.570238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.127 [2024-12-05 09:58:28.570245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:41.127 [2024-12-05 09:58:28.570252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:41.127 [2024-12-05 09:58:28.570258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.127 [2024-12-05 09:58:28.570265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:41.127 [2024-12-05 09:58:28.570271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:41.127 [2024-12-05 09:58:28.570277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.127 [2024-12-05 09:58:28.570284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:41.127 [2024-12-05 09:58:28.570291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:41.127 [2024-12-05 09:58:28.570297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.127 [2024-12-05 09:58:28.570303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:41.127 [2024-12-05 09:58:28.570309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:41.127 [2024-12-05 09:58:28.570316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.127 [2024-12-05 09:58:28.570322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:41.127 [2024-12-05 09:58:28.570328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:41.127 [2024-12-05 09:58:28.570334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:41.127 [2024-12-05 09:58:28.570341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:41.127 [2024-12-05 09:58:28.570348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:41.127 [2024-12-05 09:58:28.570355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:41.127 [2024-12-05 09:58:28.570361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:41.127 [2024-12-05 09:58:28.570367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:41.127 [2024-12-05 09:58:28.570373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.127 [2024-12-05 09:58:28.570379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:41.127 [2024-12-05 09:58:28.570386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:41.127 [2024-12-05 09:58:28.570392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.127 [2024-12-05 09:58:28.570399] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:41.127 [2024-12-05 09:58:28.570408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:41.127 [2024-12-05 09:58:28.570415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:41.127 [2024-12-05 09:58:28.570422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.127 [2024-12-05 
09:58:28.570430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:41.127 [2024-12-05 09:58:28.570437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:41.127 [2024-12-05 09:58:28.570444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:41.127 [2024-12-05 09:58:28.570451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:41.127 [2024-12-05 09:58:28.570457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:41.127 [2024-12-05 09:58:28.570463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:41.127 [2024-12-05 09:58:28.570471] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:41.127 [2024-12-05 09:58:28.570479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:41.127 [2024-12-05 09:58:28.570487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:41.127 [2024-12-05 09:58:28.570495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:41.127 [2024-12-05 09:58:28.570502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:41.127 [2024-12-05 09:58:28.570522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:41.127 [2024-12-05 09:58:28.570530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:41.127 [2024-12-05 09:58:28.570536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:41.127 [2024-12-05 09:58:28.570545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:41.127 [2024-12-05 09:58:28.570552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:41.127 [2024-12-05 09:58:28.570559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:41.127 [2024-12-05 09:58:28.570566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:41.127 [2024-12-05 09:58:28.570574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:41.127 [2024-12-05 09:58:28.570581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:41.127 [2024-12-05 09:58:28.570588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:41.128 [2024-12-05 09:58:28.570595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:41.128 [2024-12-05 09:58:28.570601] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:26:41.128 [2024-12-05 09:58:28.570609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:41.128 [2024-12-05 09:58:28.570619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:41.128 [2024-12-05 09:58:28.570626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:41.128 [2024-12-05 09:58:28.570633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:41.128 [2024-12-05 09:58:28.570640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:41.128 [2024-12-05 09:58:28.570648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.128 [2024-12-05 09:58:28.570655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:41.128 [2024-12-05 09:58:28.570663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:26:41.128 [2024-12-05 09:58:28.570672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.128 [2024-12-05 09:58:28.597237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.128 [2024-12-05 09:58:28.597359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:41.128 [2024-12-05 09:58:28.597412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.513 ms 00:26:41.128 [2024-12-05 09:58:28.597440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.128 [2024-12-05 09:58:28.597552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.128 [2024-12-05 09:58:28.597576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:41.128 [2024-12-05 09:58:28.597596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:26:41.128 [2024-12-05 09:58:28.597650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.128 [2024-12-05 09:58:28.639627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.128 [2024-12-05 09:58:28.639775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:41.128 [2024-12-05 09:58:28.639834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.909 ms 00:26:41.128 [2024-12-05 09:58:28.639858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.128 [2024-12-05 09:58:28.639911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.128 [2024-12-05 09:58:28.639936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:41.128 [2024-12-05 09:58:28.639958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:41.128 [2024-12-05 09:58:28.639976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.128 [2024-12-05 09:58:28.640418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.128 [2024-12-05 09:58:28.640533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:41.128 [2024-12-05 09:58:28.640594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:26:41.128 [2024-12-05 09:58:28.640616] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.128 [2024-12-05 09:58:28.640779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.128 [2024-12-05 09:58:28.640842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:41.128 [2024-12-05 09:58:28.640885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:26:41.128 [2024-12-05 09:58:28.640906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.128 [2024-12-05 09:58:28.654388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.128 [2024-12-05 09:58:28.654503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:41.128 [2024-12-05 09:58:28.654564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.450 ms 00:26:41.128 [2024-12-05 09:58:28.654586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.128 [2024-12-05 09:58:28.667602] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:41.128 [2024-12-05 09:58:28.667735] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:41.128 [2024-12-05 09:58:28.667794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.128 [2024-12-05 09:58:28.667816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:41.128 [2024-12-05 09:58:28.667837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.101 ms 00:26:41.128 [2024-12-05 09:58:28.667856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.128 [2024-12-05 09:58:28.692456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.128 [2024-12-05 09:58:28.692624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:41.128 [2024-12-05 09:58:28.692685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.554 ms 00:26:41.128 [2024-12-05 09:58:28.692710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.128 [2024-12-05 09:58:28.705115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.128 [2024-12-05 09:58:28.705260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:41.128 [2024-12-05 09:58:28.705314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.347 ms 00:26:41.128 [2024-12-05 09:58:28.705337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.128 [2024-12-05 09:58:28.717491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.128 [2024-12-05 09:58:28.717643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:41.128 [2024-12-05 09:58:28.717699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.107 ms 00:26:41.128 [2024-12-05 09:58:28.717721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.128 [2024-12-05 09:58:28.718374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.128 [2024-12-05 09:58:28.718490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:41.128 [2024-12-05 09:58:28.718563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:26:41.128 [2024-12-05 09:58:28.718586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:41.389 [2024-12-05 09:58:28.782136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.389 [2024-12-05 09:58:28.782381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:41.389 [2024-12-05 09:58:28.782451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.512 ms 00:26:41.389 [2024-12-05 09:58:28.782476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.389 [2024-12-05 09:58:28.793725] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:41.389 [2024-12-05 09:58:28.796833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.389 [2024-12-05 09:58:28.796985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:41.389 [2024-12-05 09:58:28.797055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.890 ms 00:26:41.389 [2024-12-05 09:58:28.797079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.389 [2024-12-05 09:58:28.797192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.389 [2024-12-05 09:58:28.797223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:41.389 [2024-12-05 09:58:28.797248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:41.389 [2024-12-05 09:58:28.797267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.389 [2024-12-05 09:58:28.797356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.389 [2024-12-05 09:58:28.797559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:41.389 [2024-12-05 09:58:28.797588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:41.389 [2024-12-05 09:58:28.797615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.389 [2024-12-05 09:58:28.797659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.389 [2024-12-05 09:58:28.797686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:41.389 [2024-12-05 09:58:28.797718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:41.389 [2024-12-05 09:58:28.797741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.389 [2024-12-05 09:58:28.797789] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:41.389 [2024-12-05 09:58:28.797861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.389 [2024-12-05 09:58:28.797882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:41.389 [2024-12-05 09:58:28.797908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:26:41.389 [2024-12-05 09:58:28.797928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.389 [2024-12-05 09:58:28.823914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.389 [2024-12-05 09:58:28.824111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:41.389 [2024-12-05 09:58:28.824176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.951 ms 00:26:41.389 [2024-12-05 09:58:28.824665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.389 [2024-12-05 09:58:28.824785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.389 [2024-12-05 
09:58:28.824803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:41.389 [2024-12-05 09:58:28.824814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:41.389 [2024-12-05 09:58:28.824831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.389 [2024-12-05 09:58:28.826643] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 280.216 ms, result 0 00:26:42.333  [2024-12-05T09:58:30.907Z] Copying: 15/1024 [MB] (15 MBps) [… intermediate copy progress updates elided …] [2024-12-05T09:59:21.125Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-12-05 09:59:20.954485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.496 [2024-12-05 09:59:20.954582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:33.496 [2024-12-05 09:59:20.954599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:33.496 [2024-12-05 09:59:20.954608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.496 [2024-12-05 09:59:20.956727] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:33.496 [2024-12-05 09:59:20.962751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.496 [2024-12-05 09:59:20.962929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:33.496 [2024-12-05 09:59:20.963013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.615 ms 00:27:33.496 [2024-12-05 09:59:20.963034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.496 [2024-12-05 09:59:20.971732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.496 [2024-12-05 09:59:20.971881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:33.496 [2024-12-05 09:59:20.971943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.031 ms 00:27:33.496 [2024-12-05 09:59:20.971954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.496 [2024-12-05 09:59:20.995937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.496 [2024-12-05 09:59:20.995992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:33.496 [2024-12-05 09:59:20.996003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.962 ms 00:27:33.496 [2024-12-05 09:59:20.996010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.496 [2024-12-05 09:59:21.000877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.496 [2024-12-05 09:59:21.000910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:33.496 [2024-12-05 09:59:21.000919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.821 ms 00:27:33.496 [2024-12-05 09:59:21.000925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.496 [2024-12-05 09:59:21.021034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.496 [2024-12-05 09:59:21.021071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:33.496 [2024-12-05 09:59:21.021081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.072 ms 00:27:33.496 [2024-12-05 09:59:21.021088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.496 [2024-12-05 09:59:21.033057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.496 [2024-12-05 09:59:21.033191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:33.496 [2024-12-05 09:59:21.033207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.935 ms 00:27:33.496 [2024-12-05 09:59:21.033214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.496 [2024-12-05 09:59:21.089935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
Action 00:27:33.496 [2024-12-05 09:59:21.089969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:33.497 [2024-12-05 09:59:21.089977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.690 ms 00:27:33.497 [2024-12-05 09:59:21.089983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.497 [2024-12-05 09:59:21.109165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.497 [2024-12-05 09:59:21.109191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:33.497 [2024-12-05 09:59:21.109198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.170 ms 00:27:33.497 [2024-12-05 09:59:21.109212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.759 [2024-12-05 09:59:21.127752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.759 [2024-12-05 09:59:21.127854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:33.759 [2024-12-05 09:59:21.127867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.514 ms 00:27:33.759 [2024-12-05 09:59:21.127873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.759 [2024-12-05 09:59:21.145853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.759 [2024-12-05 09:59:21.145881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:33.759 [2024-12-05 09:59:21.145889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.957 ms 00:27:33.759 [2024-12-05 09:59:21.145895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.759 [2024-12-05 09:59:21.163494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.759 [2024-12-05 09:59:21.163529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:33.759 [2024-12-05 09:59:21.163538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.555 ms 00:27:33.759 [2024-12-05 09:59:21.163544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.759 [2024-12-05 09:59:21.163570] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:33.759 [2024-12-05 09:59:21.163580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 108800 / 261120 wr_cnt: 1 state: open 00:27:33.759 [2024-12-05 09:59:21.163588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:33.759 [2024-12-05 09:59:21.163595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:33.759 [2024-12-05 09:59:21.163601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:33.759 [2024-12-05 09:59:21.163607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:33.759 [2024-12-05 09:59:21.163612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 
state: free 00:27:33.760 [2024-12-05 09:59:21.163636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 
0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.163995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164089] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:33.760 [2024-12-05 09:59:21.164153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:33.761 [2024-12-05 09:59:21.164159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:33.761 [2024-12-05 09:59:21.164165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:33.761 [2024-12-05 09:59:21.164171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:33.761 [2024-12-05 09:59:21.164177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:33.761 [2024-12-05 09:59:21.164183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:33.761 [2024-12-05 09:59:21.164195] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:33.761 [2024-12-05 09:59:21.164203] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 72490c1c-2963-47bd-a5c9-10ef62f869eb 00:27:33.761 [2024-12-05 09:59:21.164214] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 108800 00:27:33.761 [2024-12-05 09:59:21.164220] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 109760 00:27:33.761 [2024-12-05 09:59:21.164225] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 108800 00:27:33.761 [2024-12-05 09:59:21.164232] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0088 00:27:33.761 [2024-12-05 09:59:21.164238] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:33.761 [2024-12-05 09:59:21.164244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:33.761 [2024-12-05 09:59:21.164251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:33.761 [2024-12-05 09:59:21.164257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 
00:27:33.761 [2024-12-05 09:59:21.164262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:33.761 [2024-12-05 09:59:21.164267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.761 [2024-12-05 09:59:21.164273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:33.761 [2024-12-05 09:59:21.164279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:27:33.761 [2024-12-05 09:59:21.164284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.761 [2024-12-05 09:59:21.173748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.761 [2024-12-05 09:59:21.173771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:33.761 [2024-12-05 09:59:21.173779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.450 ms 00:27:33.761 [2024-12-05 09:59:21.173785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.761 [2024-12-05 09:59:21.174050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.761 [2024-12-05 09:59:21.174059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:33.761 [2024-12-05 09:59:21.174068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:27:33.761 [2024-12-05 09:59:21.174075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.761 [2024-12-05 09:59:21.199896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.761 [2024-12-05 09:59:21.199922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:33.761 [2024-12-05 09:59:21.199930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.761 [2024-12-05 09:59:21.199937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.761 [2024-12-05 09:59:21.199976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.761 [2024-12-05 09:59:21.199982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:33.761 [2024-12-05 09:59:21.199991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.761 [2024-12-05 09:59:21.199996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.761 [2024-12-05 09:59:21.200047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.761 [2024-12-05 09:59:21.200055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:33.761 [2024-12-05 09:59:21.200061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.761 [2024-12-05 09:59:21.200067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.761 [2024-12-05 09:59:21.200079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.761 [2024-12-05 09:59:21.200086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:33.761 [2024-12-05 09:59:21.200092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.761 [2024-12-05 09:59:21.200100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.761 [2024-12-05 09:59:21.258676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.761 [2024-12-05 09:59:21.258818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:33.761 [2024-12-05 09:59:21.258833] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.761 [2024-12-05 09:59:21.258840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.761 [2024-12-05 09:59:21.306878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.761 [2024-12-05 09:59:21.306908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:33.761 [2024-12-05 09:59:21.306919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.761 [2024-12-05 09:59:21.306926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.761 [2024-12-05 09:59:21.306980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.761 [2024-12-05 09:59:21.306987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:33.761 [2024-12-05 09:59:21.306993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.761 [2024-12-05 09:59:21.306999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.761 [2024-12-05 09:59:21.307025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.761 [2024-12-05 09:59:21.307032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:33.761 [2024-12-05 09:59:21.307038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.761 [2024-12-05 09:59:21.307044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.761 [2024-12-05 09:59:21.307115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.761 [2024-12-05 09:59:21.307124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:33.761 [2024-12-05 09:59:21.307130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.761 [2024-12-05 09:59:21.307136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.761 [2024-12-05 09:59:21.307160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.761 [2024-12-05 09:59:21.307167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:33.761 [2024-12-05 09:59:21.307173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.761 [2024-12-05 09:59:21.307179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.761 [2024-12-05 09:59:21.307208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.761 [2024-12-05 09:59:21.307215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:33.761 [2024-12-05 09:59:21.307221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.761 [2024-12-05 09:59:21.307227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.761 [2024-12-05 09:59:21.307258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.761 [2024-12-05 09:59:21.307265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:33.761 [2024-12-05 09:59:21.307272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.761 [2024-12-05 09:59:21.307278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.761 [2024-12-05 09:59:21.307366] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 353.687 ms, result 0 00:27:35.143 00:27:35.143 
00:27:35.143 09:59:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:37.692 09:59:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:37.692 [2024-12-05 09:59:24.968221] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:27:37.692 [2024-12-05 09:59:24.969081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81600 ] 00:27:37.692 [2024-12-05 09:59:25.135188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.692 [2024-12-05 09:59:25.252031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.953 [2024-12-05 09:59:25.548204] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:37.953 [2024-12-05 09:59:25.548284] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:38.214 [2024-12-05 09:59:25.710563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.214 [2024-12-05 09:59:25.710624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:38.214 [2024-12-05 09:59:25.710640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:38.214 [2024-12-05 09:59:25.710650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.214 [2024-12-05 09:59:25.710708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.214 [2024-12-05 09:59:25.710722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:38.215 [2024-12-05 09:59:25.710731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:38.215 [2024-12-05 09:59:25.710739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.215 [2024-12-05 09:59:25.710761] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:38.215 [2024-12-05 09:59:25.711452] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:38.215 [2024-12-05 09:59:25.711471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.215 [2024-12-05 09:59:25.711480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:38.215 [2024-12-05 09:59:25.711490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:27:38.215 [2024-12-05 09:59:25.711499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.215 [2024-12-05 09:59:25.713194] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:38.215 [2024-12-05 09:59:25.727811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.215 [2024-12-05 09:59:25.727861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:38.215 [2024-12-05 09:59:25.727875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.619 ms 00:27:38.215 [2024-12-05 09:59:25.727884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.215 [2024-12-05 09:59:25.727965] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.215 [2024-12-05 09:59:25.727977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:38.215 [2024-12-05 09:59:25.727985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:38.215 [2024-12-05 09:59:25.727993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.215 [2024-12-05 09:59:25.736124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.215 [2024-12-05 09:59:25.736167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:38.215 [2024-12-05 09:59:25.736178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.029 ms 00:27:38.215 [2024-12-05 09:59:25.736191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.215 [2024-12-05 09:59:25.736274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.215 [2024-12-05 09:59:25.736284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:38.215 [2024-12-05 09:59:25.736293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:38.215 [2024-12-05 09:59:25.736301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.215 [2024-12-05 09:59:25.736345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.215 [2024-12-05 09:59:25.736356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:38.215 [2024-12-05 09:59:25.736366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:38.215 [2024-12-05 09:59:25.736374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.215 [2024-12-05 09:59:25.736400] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:38.215 [2024-12-05 09:59:25.740476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.215 [2024-12-05 09:59:25.740532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:38.215 [2024-12-05 09:59:25.740547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.082 ms 00:27:38.215 [2024-12-05 09:59:25.740556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.215 [2024-12-05 09:59:25.740600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.215 [2024-12-05 09:59:25.740611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:38.215 [2024-12-05 09:59:25.740620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:38.215 [2024-12-05 09:59:25.740628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.215 [2024-12-05 09:59:25.740678] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:38.215 [2024-12-05 09:59:25.740703] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:38.215 [2024-12-05 09:59:25.740740] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:38.215 [2024-12-05 09:59:25.740760] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:38.215 [2024-12-05 09:59:25.740867] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 
00:27:38.215 [2024-12-05 09:59:25.740885] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:38.215 [2024-12-05 09:59:25.740896] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:38.215 [2024-12-05 09:59:25.740907] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:38.215 [2024-12-05 09:59:25.740918] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:38.215 [2024-12-05 09:59:25.740927] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:38.215 [2024-12-05 09:59:25.740936] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:38.215 [2024-12-05 09:59:25.740948] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:38.215 [2024-12-05 09:59:25.740958] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:38.215 [2024-12-05 09:59:25.740967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.215 [2024-12-05 09:59:25.740975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:38.215 [2024-12-05 09:59:25.740984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:27:38.215 [2024-12-05 09:59:25.740994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.215 [2024-12-05 09:59:25.741077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.215 [2024-12-05 09:59:25.741087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:38.215 [2024-12-05 09:59:25.741096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:38.215 [2024-12-05 09:59:25.741105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.215 [2024-12-05 09:59:25.741211] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:38.215 [2024-12-05 09:59:25.741223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:38.215 [2024-12-05 09:59:25.741232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:38.215 [2024-12-05 09:59:25.741241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.215 [2024-12-05 09:59:25.741250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:38.215 [2024-12-05 09:59:25.741257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:38.215 [2024-12-05 09:59:25.741264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:38.215 [2024-12-05 09:59:25.741271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:38.215 [2024-12-05 09:59:25.741278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:38.215 [2024-12-05 09:59:25.741286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:38.215 [2024-12-05 09:59:25.741295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:38.215 [2024-12-05 09:59:25.741302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:38.215 [2024-12-05 09:59:25.741308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:38.215 [2024-12-05 09:59:25.741322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:38.215 
[2024-12-05 09:59:25.741330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:38.215 [2024-12-05 09:59:25.741337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.215 [2024-12-05 09:59:25.741349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:38.215 [2024-12-05 09:59:25.741356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:38.215 [2024-12-05 09:59:25.741363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.215 [2024-12-05 09:59:25.741370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:38.215 [2024-12-05 09:59:25.741377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:38.215 [2024-12-05 09:59:25.741384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.215 [2024-12-05 09:59:25.741391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:38.215 [2024-12-05 09:59:25.741398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:38.215 [2024-12-05 09:59:25.741406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.215 [2024-12-05 09:59:25.741414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:38.215 [2024-12-05 09:59:25.741421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:38.215 [2024-12-05 09:59:25.741428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.215 [2024-12-05 09:59:25.741434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:38.215 [2024-12-05 09:59:25.741442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:38.215 [2024-12-05 09:59:25.741448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.215 [2024-12-05 09:59:25.741456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:38.215 [2024-12-05 09:59:25.741464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:38.215 [2024-12-05 09:59:25.741471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:38.215 [2024-12-05 09:59:25.741478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:38.215 [2024-12-05 09:59:25.741484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:38.215 [2024-12-05 09:59:25.741491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:38.215 [2024-12-05 09:59:25.741498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:38.215 [2024-12-05 09:59:25.741505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:38.215 [2024-12-05 09:59:25.741531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.215 [2024-12-05 09:59:25.741538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:38.215 [2024-12-05 09:59:25.741547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:38.215 [2024-12-05 09:59:25.741555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.215 [2024-12-05 09:59:25.741561] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:38.215 [2024-12-05 09:59:25.741569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:38.216 [2024-12-05 09:59:25.741577] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:38.216 [2024-12-05 09:59:25.741584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.216 [2024-12-05 09:59:25.741592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:38.216 [2024-12-05 09:59:25.741603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:38.216 [2024-12-05 09:59:25.741612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:38.216 [2024-12-05 09:59:25.741619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:38.216 [2024-12-05 09:59:25.741627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:38.216 [2024-12-05 09:59:25.741634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:38.216 [2024-12-05 09:59:25.741643] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:38.216 [2024-12-05 09:59:25.741654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.216 [2024-12-05 09:59:25.741669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:38.216 [2024-12-05 09:59:25.741677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:38.216 [2024-12-05 09:59:25.741685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:38.216 [2024-12-05 09:59:25.741693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:38.216 [2024-12-05 09:59:25.741701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:38.216 [2024-12-05 09:59:25.741708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:38.216 [2024-12-05 09:59:25.741716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:38.216 [2024-12-05 09:59:25.741725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:38.216 [2024-12-05 09:59:25.741733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:38.216 [2024-12-05 09:59:25.741741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:38.216 [2024-12-05 09:59:25.741748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:38.216 [2024-12-05 09:59:25.741755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:38.216 [2024-12-05 09:59:25.741762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:38.216 [2024-12-05 09:59:25.741769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x7220 blk_sz:0x13c0e0 00:27:38.216 [2024-12-05 09:59:25.741778] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:38.216 [2024-12-05 09:59:25.741786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.216 [2024-12-05 09:59:25.741795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:38.216 [2024-12-05 09:59:25.741803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:38.216 [2024-12-05 09:59:25.741811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:38.216 [2024-12-05 09:59:25.741818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:38.216 [2024-12-05 09:59:25.741828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.216 [2024-12-05 09:59:25.741835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:38.216 [2024-12-05 09:59:25.741844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:27:38.216 [2024-12-05 09:59:25.741851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.216 [2024-12-05 09:59:25.773423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.216 [2024-12-05 09:59:25.773470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:38.216 [2024-12-05 09:59:25.773483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.526 ms 00:27:38.216 [2024-12-05 09:59:25.773496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.216 [2024-12-05 09:59:25.773614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.216 [2024-12-05 09:59:25.773626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:38.216 [2024-12-05 09:59:25.773634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:38.216 [2024-12-05 09:59:25.773643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.216 [2024-12-05 09:59:25.819030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.216 [2024-12-05 09:59:25.819083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:38.216 [2024-12-05 09:59:25.819097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.326 ms 00:27:38.216 [2024-12-05 09:59:25.819106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.216 [2024-12-05 09:59:25.819153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.216 [2024-12-05 09:59:25.819165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:38.216 [2024-12-05 09:59:25.819178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:38.216 [2024-12-05 09:59:25.819186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.216 [2024-12-05 09:59:25.819802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.216 [2024-12-05 09:59:25.819827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 
00:27:38.216 [2024-12-05 09:59:25.819840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:27:38.216 [2024-12-05 09:59:25.819849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.216 [2024-12-05 09:59:25.820002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.216 [2024-12-05 09:59:25.820016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:38.216 [2024-12-05 09:59:25.820030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:27:38.216 [2024-12-05 09:59:25.820055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.216 [2024-12-05 09:59:25.835580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.216 [2024-12-05 09:59:25.835625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:38.216 [2024-12-05 09:59:25.835636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.503 ms 00:27:38.216 [2024-12-05 09:59:25.835645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.477 [2024-12-05 09:59:25.849553] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:38.477 [2024-12-05 09:59:25.849600] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:38.477 [2024-12-05 09:59:25.849614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.477 [2024-12-05 09:59:25.849624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:38.477 [2024-12-05 09:59:25.849634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.859 ms 00:27:38.477 [2024-12-05 09:59:25.849643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.477 [2024-12-05 09:59:25.875574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.477 [2024-12-05 09:59:25.875623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:38.477 [2024-12-05 09:59:25.875636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.877 ms 00:27:38.477 [2024-12-05 09:59:25.875644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.477 [2024-12-05 09:59:25.888646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.477 [2024-12-05 09:59:25.888852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:38.477 [2024-12-05 09:59:25.888872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.949 ms 00:27:38.477 [2024-12-05 09:59:25.888882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.477 [2024-12-05 09:59:25.901562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.477 [2024-12-05 09:59:25.901603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:38.477 [2024-12-05 09:59:25.901615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.641 ms 00:27:38.477 [2024-12-05 09:59:25.901623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.477 [2024-12-05 09:59:25.902266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.477 [2024-12-05 09:59:25.902294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:38.477 [2024-12-05 09:59:25.902309] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:27:38.477 [2024-12-05 09:59:25.902318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.477 [2024-12-05 09:59:25.967095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.477 [2024-12-05 09:59:25.967161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:38.477 [2024-12-05 09:59:25.967185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.757 ms 00:27:38.477 [2024-12-05 09:59:25.967194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.477 [2024-12-05 09:59:25.979266] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:38.477 [2024-12-05 09:59:25.982503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.477 [2024-12-05 09:59:25.982555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:38.477 [2024-12-05 09:59:25.982570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.253 ms 00:27:38.477 [2024-12-05 09:59:25.982580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.477 [2024-12-05 09:59:25.982671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.477 [2024-12-05 09:59:25.982686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:38.477 [2024-12-05 09:59:25.982699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:38.477 [2024-12-05 09:59:25.982708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.477 [2024-12-05 09:59:25.984442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.477 [2024-12-05 09:59:25.984488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:38.477 [2024-12-05 09:59:25.984500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.694 ms 00:27:38.477 [2024-12-05 09:59:25.984529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.477 [2024-12-05 09:59:25.984560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.477 [2024-12-05 09:59:25.984570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:38.477 [2024-12-05 09:59:25.984580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:38.477 [2024-12-05 09:59:25.984595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.477 [2024-12-05 09:59:25.984636] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:38.477 [2024-12-05 09:59:25.984649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.477 [2024-12-05 09:59:25.984658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:38.477 [2024-12-05 09:59:25.984667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:38.477 [2024-12-05 09:59:25.984676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.477 [2024-12-05 09:59:26.009922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.477 [2024-12-05 09:59:26.010122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:38.477 [2024-12-05 09:59:26.010151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.226 ms 00:27:38.477 [2024-12-05 09:59:26.010161] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:38.477 [2024-12-05 09:59:26.010243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.477 [2024-12-05 09:59:26.010254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:38.477 [2024-12-05 09:59:26.010264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:38.477 [2024-12-05 09:59:26.010272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.477 [2024-12-05 09:59:26.011814] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 300.736 ms, result 0 00:27:39.870  [2024-12-05T10:00:15.606Z] Copying: 1024/1024 [MB] (average 20 MBps)
[2024-12-05 10:00:15.441771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.977 [2024-12-05 10:00:15.442034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:27.977 [2024-12-05 10:00:15.442062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:27.977 [2024-12-05 10:00:15.442078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.977 [2024-12-05 10:00:15.442119] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:27.977 [2024-12-05 10:00:15.447463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.977 [2024-12-05 10:00:15.447534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:27.977 [2024-12-05 10:00:15.447550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.317 ms 00:28:27.977 [2024-12-05 10:00:15.447561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.977 [2024-12-05 10:00:15.447995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.977 [2024-12-05 10:00:15.448033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:27.977 [2024-12-05 10:00:15.448047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:28:27.977 [2024-12-05 10:00:15.448099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.977 [2024-12-05 10:00:15.460937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.977 [2024-12-05 10:00:15.460995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:27.977 [2024-12-05 10:00:15.461009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.812 ms 00:28:27.977 [2024-12-05 10:00:15.461018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.977 [2024-12-05 10:00:15.467568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.977 [2024-12-05 10:00:15.467620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:27.977 [2024-12-05 10:00:15.467634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.510 ms 00:28:27.977 [2024-12-05 10:00:15.467649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.977 [2024-12-05 10:00:15.494975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.977 [2024-12-05 10:00:15.495217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:27.977 [2024-12-05 10:00:15.495241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.261 ms 00:28:27.977 [2024-12-05 10:00:15.495250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.977 [2024-12-05 10:00:15.516722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.977 [2024-12-05 10:00:15.516774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:27.977 [2024-12-05 10:00:15.516789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.048 ms 00:28:27.977 [2024-12-05 10:00:15.516798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.977 [2024-12-05 10:00:15.521406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.977 [2024-12-05 10:00:15.521452] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:27.977 [2024-12-05 10:00:15.521472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.553 ms 00:28:27.977 [2024-12-05 10:00:15.521482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.977 [2024-12-05 10:00:15.546782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.977 [2024-12-05 10:00:15.546824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:27.977 [2024-12-05 10:00:15.546836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.284 ms 00:28:27.977 [2024-12-05 10:00:15.546844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.977 [2024-12-05 10:00:15.571907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.977 [2024-12-05 10:00:15.571952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:27.977 [2024-12-05 10:00:15.571964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.018 ms 00:28:27.977 [2024-12-05 10:00:15.571972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.977 [2024-12-05 10:00:15.596366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.977 [2024-12-05 10:00:15.596406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:27.977 [2024-12-05 10:00:15.596419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.353 ms 00:28:27.977 [2024-12-05 10:00:15.596427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.240 [2024-12-05 10:00:15.620914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.240 [2024-12-05 10:00:15.620956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:28.240 [2024-12-05 10:00:15.620968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.416 ms 00:28:28.240 [2024-12-05 10:00:15.620976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.240 [2024-12-05 10:00:15.621018] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:28.240 [2024-12-05 10:00:15.621035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:28.240 [2024-12-05 10:00:15.621046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:28.240 [2024-12-05 10:00:15.621055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621112] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:28.240 [2024-12-05 10:00:15.621313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 
[2024-12-05 10:00:15.621322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:28:28.241 [2024-12-05 10:00:15.621542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:28.241 [2024-12-05 10:00:15.621896] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:28.241 [2024-12-05 10:00:15.621905] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 72490c1c-2963-47bd-a5c9-10ef62f869eb 00:28:28.241 [2024-12-05 10:00:15.621915] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:28.241 [2024-12-05 10:00:15.621929] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 155840 00:28:28.241 [2024-12-05 10:00:15.621937] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 153856 00:28:28.241 [2024-12-05 10:00:15.621945] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0129 00:28:28.241 [2024-12-05 10:00:15.621953] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:28.241 [2024-12-05 10:00:15.621969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:28.241 [2024-12-05 10:00:15.621976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:28.241 [2024-12-05 10:00:15.621984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:28.241 [2024-12-05 10:00:15.621991] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:28.241 [2024-12-05 10:00:15.621998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.241 [2024-12-05 10:00:15.622006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:28.241 [2024-12-05 10:00:15.622015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:28:28.241 [2024-12-05 10:00:15.622023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.241 [2024-12-05 10:00:15.635470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.241 [2024-12-05 10:00:15.635533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:28.241 [2024-12-05 10:00:15.635546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.428 ms 00:28:28.241 [2024-12-05 10:00:15.635554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.241 [2024-12-05 10:00:15.635968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.241 [2024-12-05 10:00:15.635986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:28.241 [2024-12-05 10:00:15.635996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:28:28.241 [2024-12-05 10:00:15.636010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.241 [2024-12-05 10:00:15.672264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.241 [2024-12-05 10:00:15.672309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:28.241 [2024-12-05 10:00:15.672322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.241 [2024-12-05 10:00:15.672331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.241 [2024-12-05 10:00:15.672394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.241 [2024-12-05 10:00:15.672403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:28.241 [2024-12-05 10:00:15.672411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.241 [2024-12-05 10:00:15.672426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.241 [2024-12-05 10:00:15.672537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.241 [2024-12-05 10:00:15.672551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:28.241 [2024-12-05 10:00:15.672560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.241 [2024-12-05 10:00:15.672569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.241 [2024-12-05 10:00:15.672586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.241 [2024-12-05 10:00:15.672594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:28.241 [2024-12-05 10:00:15.672603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.241 [2024-12-05 10:00:15.672612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.241 [2024-12-05 10:00:15.758293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.241 [2024-12-05 10:00:15.758343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:28.242 [2024-12-05 10:00:15.758357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:28:28.242 [2024-12-05 10:00:15.758367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.242 [2024-12-05 10:00:15.827664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.242 [2024-12-05 10:00:15.827714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:28.242 [2024-12-05 10:00:15.827727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.242 [2024-12-05 10:00:15.827736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.242 [2024-12-05 10:00:15.827805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.242 [2024-12-05 10:00:15.827815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:28.242 [2024-12-05 10:00:15.827824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.242 [2024-12-05 10:00:15.827832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.242 [2024-12-05 10:00:15.827888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.242 [2024-12-05 10:00:15.827898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:28.242 [2024-12-05 10:00:15.827907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.242 [2024-12-05 10:00:15.827916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.242 [2024-12-05 10:00:15.828009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.242 [2024-12-05 10:00:15.828026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:28.242 [2024-12-05 10:00:15.828035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.242 [2024-12-05 10:00:15.828043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.242 [2024-12-05 10:00:15.828091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.242 [2024-12-05 10:00:15.828102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:28.242 [2024-12-05 10:00:15.828111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.242 [2024-12-05 10:00:15.828119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.242 [2024-12-05 10:00:15.828162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.242 [2024-12-05 10:00:15.828175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:28.242 [2024-12-05 10:00:15.828184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.242 [2024-12-05 10:00:15.828192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.242 [2024-12-05 10:00:15.828242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.242 [2024-12-05 10:00:15.828254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:28.242 [2024-12-05 10:00:15.828264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.242 [2024-12-05 10:00:15.828272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.242 [2024-12-05 10:00:15.828413] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 386.621 ms, result 0 00:28:29.185 00:28:29.186 00:28:29.186 10:00:16 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:31.103 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:31.103 10:00:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:31.103 [2024-12-05 10:00:18.686208] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:28:31.103 [2024-12-05 10:00:18.686327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82147 ] 00:28:31.363 [2024-12-05 10:00:18.842228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.363 [2024-12-05 10:00:18.921085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.623 [2024-12-05 10:00:19.130406] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:31.623 [2024-12-05 10:00:19.130457] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:31.886 [2024-12-05 10:00:19.282555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.886 [2024-12-05 10:00:19.282590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:31.886 [2024-12-05 10:00:19.282600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:31.886 [2024-12-05 10:00:19.282606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.886 [2024-12-05 10:00:19.282640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.886 [2024-12-05 10:00:19.282649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:31.886 [2024-12-05 10:00:19.282655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:31.886 [2024-12-05 10:00:19.282660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.886 [2024-12-05 10:00:19.282673] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:31.886 [2024-12-05 10:00:19.283202] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:31.886 [2024-12-05 10:00:19.283213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.886 [2024-12-05 10:00:19.283219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:31.886 [2024-12-05 10:00:19.283225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:28:31.886 [2024-12-05 10:00:19.283231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.886 [2024-12-05 10:00:19.284205] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:31.886 [2024-12-05 10:00:19.293739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.886 [2024-12-05 10:00:19.293766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:31.886 [2024-12-05 10:00:19.293774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.535 ms 00:28:31.886 [2024-12-05 10:00:19.293781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:31.886 [2024-12-05 10:00:19.293825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.886 [2024-12-05 10:00:19.293832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:31.886 [2024-12-05 10:00:19.293838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:31.886 [2024-12-05 10:00:19.293844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.886 [2024-12-05 10:00:19.298145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.886 [2024-12-05 10:00:19.298169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:31.886 [2024-12-05 10:00:19.298176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.256 ms 00:28:31.886 [2024-12-05 10:00:19.298185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.886 [2024-12-05 10:00:19.298238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.886 [2024-12-05 10:00:19.298245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:31.886 [2024-12-05 10:00:19.298251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:31.886 [2024-12-05 10:00:19.298257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.886 [2024-12-05 10:00:19.298287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.886 [2024-12-05 10:00:19.298294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:31.886 [2024-12-05 10:00:19.298301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:31.886 [2024-12-05 10:00:19.298306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.886 [2024-12-05 10:00:19.298321] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:31.886 [2024-12-05 10:00:19.300960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.887 [2024-12-05 10:00:19.300981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:31.887 [2024-12-05 10:00:19.300991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:28:31.887 [2024-12-05 10:00:19.300997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.887 [2024-12-05 10:00:19.301024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.887 [2024-12-05 10:00:19.301031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:31.887 [2024-12-05 10:00:19.301037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:31.887 [2024-12-05 10:00:19.301042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.887 [2024-12-05 10:00:19.301056] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:31.887 [2024-12-05 10:00:19.301070] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:31.887 [2024-12-05 10:00:19.301097] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:31.887 [2024-12-05 10:00:19.301109] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:31.887 [2024-12-05 10:00:19.301188] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:31.887 [2024-12-05 10:00:19.301196] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:31.887 [2024-12-05 10:00:19.301203] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:31.887 [2024-12-05 10:00:19.301210] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:31.887 [2024-12-05 10:00:19.301217] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:31.887 [2024-12-05 10:00:19.301223] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:31.887 [2024-12-05 10:00:19.301229] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:31.887 [2024-12-05 10:00:19.301236] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:31.887 [2024-12-05 10:00:19.301242] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:31.887 [2024-12-05 10:00:19.301248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.887 [2024-12-05 10:00:19.301253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:31.887 [2024-12-05 10:00:19.301260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:28:31.887 [2024-12-05 10:00:19.301265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.887 [2024-12-05 10:00:19.301327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.887 [2024-12-05 10:00:19.301333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:31.887 [2024-12-05 10:00:19.301338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:31.887 [2024-12-05 10:00:19.301344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.887 [2024-12-05 10:00:19.301420] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:31.887 [2024-12-05 10:00:19.301428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:31.887 [2024-12-05 10:00:19.301434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:31.887 [2024-12-05 10:00:19.301440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.887 [2024-12-05 10:00:19.301446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:31.887 [2024-12-05 10:00:19.301451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:31.887 [2024-12-05 10:00:19.301456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:31.887 [2024-12-05 10:00:19.301462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:31.887 [2024-12-05 10:00:19.301467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:31.887 [2024-12-05 10:00:19.301472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:31.887 [2024-12-05 10:00:19.301477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:31.887 [2024-12-05 10:00:19.301482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:31.887 [2024-12-05 10:00:19.301487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:31.887 [2024-12-05 10:00:19.301496] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md 00:28:31.887 [2024-12-05 10:00:19.301502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:31.887 [2024-12-05 10:00:19.301524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.887 [2024-12-05 10:00:19.301530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:31.887 [2024-12-05 10:00:19.301536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:31.887 [2024-12-05 10:00:19.301540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.887 [2024-12-05 10:00:19.301546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:31.887 [2024-12-05 10:00:19.301551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:31.887 [2024-12-05 10:00:19.301556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.887 [2024-12-05 10:00:19.301562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:31.887 [2024-12-05 10:00:19.301567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:31.887 [2024-12-05 10:00:19.301572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.887 [2024-12-05 10:00:19.301577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:31.887 [2024-12-05 10:00:19.301582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:31.887 [2024-12-05 10:00:19.301587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.887 [2024-12-05 10:00:19.301592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:31.887 [2024-12-05 10:00:19.301598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:31.887 [2024-12-05 10:00:19.301603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.887 [2024-12-05 10:00:19.301608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:31.887 [2024-12-05 10:00:19.301613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:31.887 [2024-12-05 10:00:19.301619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:31.887 [2024-12-05 10:00:19.301624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:31.887 [2024-12-05 10:00:19.301629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:31.887 [2024-12-05 10:00:19.301634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:31.887 [2024-12-05 10:00:19.301639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:31.887 [2024-12-05 10:00:19.301644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:31.887 [2024-12-05 10:00:19.301649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.887 [2024-12-05 10:00:19.301654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:31.887 [2024-12-05 10:00:19.301659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:31.887 [2024-12-05 10:00:19.301664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.887 [2024-12-05 10:00:19.301669] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:31.887 [2024-12-05 10:00:19.301675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:31.887 [2024-12-05 
10:00:19.301681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:31.887 [2024-12-05 10:00:19.301686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.887 [2024-12-05 10:00:19.301692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:31.887 [2024-12-05 10:00:19.301697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:31.887 [2024-12-05 10:00:19.301703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:31.887 [2024-12-05 10:00:19.301708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:31.887 [2024-12-05 10:00:19.301713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:31.887 [2024-12-05 10:00:19.301718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:31.887 [2024-12-05 10:00:19.301724] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:31.887 [2024-12-05 10:00:19.301731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:31.887 [2024-12-05 10:00:19.301739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:31.887 [2024-12-05 10:00:19.301744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:31.887 [2024-12-05 10:00:19.301750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:31.887 [2024-12-05 10:00:19.301755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:31.887 [2024-12-05 10:00:19.301760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:31.887 [2024-12-05 10:00:19.301765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:31.887 [2024-12-05 10:00:19.301771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:31.887 [2024-12-05 10:00:19.301776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:31.887 [2024-12-05 10:00:19.301781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:31.887 [2024-12-05 10:00:19.301787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:31.887 [2024-12-05 10:00:19.301792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:31.887 [2024-12-05 10:00:19.301797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:31.887 [2024-12-05 10:00:19.301802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:31.887 [2024-12-05 10:00:19.301808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:31.887 [2024-12-05 10:00:19.301813] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:31.887 [2024-12-05 10:00:19.301819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:31.888 [2024-12-05 10:00:19.301825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:31.888 [2024-12-05 10:00:19.301831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:31.888 [2024-12-05 10:00:19.301836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:31.888 [2024-12-05 10:00:19.301841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:31.888 [2024-12-05 10:00:19.301847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.301852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:31.888 [2024-12-05 10:00:19.301858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:28:31.888 [2024-12-05 10:00:19.301863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.322744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.322839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:31.888 [2024-12-05 10:00:19.322894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.848 ms 00:28:31.888 [2024-12-05 10:00:19.322917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.322991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.323007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:31.888 [2024-12-05 10:00:19.323022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:28:31.888 [2024-12-05 10:00:19.323066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.362895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.363005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:31.888 [2024-12-05 10:00:19.363062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.777 ms 00:28:31.888 [2024-12-05 10:00:19.363098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.363140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.363221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:31.888 [2024-12-05 10:00:19.363252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:31.888 [2024-12-05 10:00:19.363294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.363622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.363681] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:31.888 [2024-12-05 10:00:19.363951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:28:31.888 [2024-12-05 10:00:19.364025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.364161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.364184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:31.888 [2024-12-05 10:00:19.364224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:28:31.888 [2024-12-05 10:00:19.364247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.374632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.374719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:31.888 [2024-12-05 10:00:19.374761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.359 ms 00:28:31.888 [2024-12-05 10:00:19.374778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.384482] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:31.888 [2024-12-05 10:00:19.384599] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:31.888 [2024-12-05 10:00:19.384648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.384664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:31.888 [2024-12-05 10:00:19.384954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.774 ms 00:28:31.888 [2024-12-05 10:00:19.384987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.403638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.403733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:31.888 [2024-12-05 10:00:19.403746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.600 ms 00:28:31.888 [2024-12-05 10:00:19.403752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.412567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.412592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:31.888 [2024-12-05 10:00:19.412599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.782 ms 00:28:31.888 [2024-12-05 10:00:19.412604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.421172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.421195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:31.888 [2024-12-05 10:00:19.421203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.541 ms 00:28:31.888 [2024-12-05 10:00:19.421208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.421672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.421748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L 
checkpointing 00:28:31.888 [2024-12-05 10:00:19.421762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:28:31.888 [2024-12-05 10:00:19.421768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.465340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.465455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:31.888 [2024-12-05 10:00:19.465474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.558 ms 00:28:31.888 [2024-12-05 10:00:19.465480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.473392] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:31.888 [2024-12-05 10:00:19.475150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.475174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:31.888 [2024-12-05 10:00:19.475183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.626 ms 00:28:31.888 [2024-12-05 10:00:19.475195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.475248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.475256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:31.888 [2024-12-05 10:00:19.475265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:31.888 [2024-12-05 10:00:19.475272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.475761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.475775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:31.888 [2024-12-05 10:00:19.475783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:28:31.888 [2024-12-05 10:00:19.475789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.475806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.475812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:31.888 [2024-12-05 10:00:19.475818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:31.888 [2024-12-05 10:00:19.475824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.475861] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:31.888 [2024-12-05 10:00:19.475869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.475875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:31.888 [2024-12-05 10:00:19.475882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:31.888 [2024-12-05 10:00:19.475887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.493527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.493553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:31.888 [2024-12-05 10:00:19.493564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.627 ms 
00:28:31.888 [2024-12-05 10:00:19.493570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.493621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.888 [2024-12-05 10:00:19.493628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:31.888 [2024-12-05 10:00:19.493634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:31.888 [2024-12-05 10:00:19.493640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.888 [2024-12-05 10:00:19.494346] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 211.467 ms, result 0 00:28:33.275  [2024-12-05T10:00:21.848Z] Copying: 26/1024 [MB] (26 MBps) [2024-12-05T10:00:22.792Z] Copying: 42/1024 [MB] (15 MBps) [2024-12-05T10:00:23.737Z] Copying: 60/1024 [MB] (18 MBps) [2024-12-05T10:00:24.683Z] Copying: 80/1024 [MB] (20 MBps) [2024-12-05T10:00:26.064Z] Copying: 94/1024 [MB] (13 MBps) [2024-12-05T10:00:26.634Z] Copying: 111/1024 [MB] (17 MBps) [2024-12-05T10:00:28.021Z] Copying: 127/1024 [MB] (16 MBps) [2024-12-05T10:00:28.965Z] Copying: 141/1024 [MB] (14 MBps) [2024-12-05T10:00:29.910Z] Copying: 158/1024 [MB] (16 MBps) [2024-12-05T10:00:30.855Z] Copying: 178/1024 [MB] (19 MBps) [2024-12-05T10:00:31.800Z] Copying: 192/1024 [MB] (14 MBps) [2024-12-05T10:00:32.745Z] Copying: 209/1024 [MB] (16 MBps) [2024-12-05T10:00:33.688Z] Copying: 229/1024 [MB] (20 MBps) [2024-12-05T10:00:34.630Z] Copying: 249/1024 [MB] (19 MBps) [2024-12-05T10:00:36.017Z] Copying: 263/1024 [MB] (14 MBps) [2024-12-05T10:00:36.961Z] Copying: 275/1024 [MB] (12 MBps) [2024-12-05T10:00:37.907Z] Copying: 298/1024 [MB] (22 MBps) [2024-12-05T10:00:38.945Z] Copying: 310/1024 [MB] (11 MBps) [2024-12-05T10:00:39.888Z] Copying: 322/1024 [MB] (12 MBps) [2024-12-05T10:00:40.834Z] Copying: 342/1024 [MB] (20 MBps) [2024-12-05T10:00:41.779Z] Copying: 364/1024 [MB] (21 MBps) [2024-12-05T10:00:42.722Z] Copying: 377/1024 [MB] (12 MBps) [2024-12-05T10:00:43.665Z] Copying: 387/1024 [MB] (10 MBps) [2024-12-05T10:00:45.049Z] Copying: 398/1024 [MB] (10 MBps) [2024-12-05T10:00:45.989Z] Copying: 409/1024 [MB] (10 MBps) [2024-12-05T10:00:46.929Z] Copying: 420/1024 [MB] (10 MBps) [2024-12-05T10:00:47.873Z] Copying: 430/1024 [MB] (10 MBps) [2024-12-05T10:00:48.825Z] Copying: 446/1024 [MB] (15 MBps) [2024-12-05T10:00:49.769Z] Copying: 457/1024 [MB] (11 MBps) [2024-12-05T10:00:50.711Z] Copying: 468/1024 [MB] (10 MBps) [2024-12-05T10:00:51.655Z] Copying: 479/1024 [MB] (10 MBps) [2024-12-05T10:00:53.045Z] Copying: 490/1024 [MB] (10 MBps) [2024-12-05T10:00:53.989Z] Copying: 502/1024 [MB] (12 MBps) [2024-12-05T10:00:54.933Z] Copying: 513/1024 [MB] (10 MBps) [2024-12-05T10:00:55.876Z] Copying: 530/1024 [MB] (17 MBps) [2024-12-05T10:00:56.821Z] Copying: 544/1024 [MB] (13 MBps) [2024-12-05T10:00:57.764Z] Copying: 562/1024 [MB] (17 MBps) [2024-12-05T10:00:58.710Z] Copying: 578/1024 [MB] (16 MBps) [2024-12-05T10:00:59.653Z] Copying: 592/1024 [MB] (13 MBps) [2024-12-05T10:01:01.042Z] Copying: 608/1024 [MB] (15 MBps) [2024-12-05T10:01:01.987Z] Copying: 629/1024 [MB] (20 MBps) [2024-12-05T10:01:02.932Z] Copying: 644/1024 [MB] (15 MBps) [2024-12-05T10:01:03.876Z] Copying: 664/1024 [MB] (19 MBps) [2024-12-05T10:01:04.818Z] Copying: 683/1024 [MB] (19 MBps) [2024-12-05T10:01:05.759Z] Copying: 705/1024 [MB] (22 MBps) [2024-12-05T10:01:06.706Z] Copying: 724/1024 [MB] (18 MBps) [2024-12-05T10:01:07.707Z] Copying: 752/1024 
[MB] (27 MBps) [2024-12-05T10:01:08.654Z] Copying: 766/1024 [MB] (14 MBps) [2024-12-05T10:01:10.043Z] Copying: 777/1024 [MB] (10 MBps) [2024-12-05T10:01:10.989Z] Copying: 787/1024 [MB] (10 MBps) [2024-12-05T10:01:11.933Z] Copying: 803/1024 [MB] (16 MBps) [2024-12-05T10:01:12.877Z] Copying: 817/1024 [MB] (14 MBps) [2024-12-05T10:01:13.820Z] Copying: 832/1024 [MB] (14 MBps) [2024-12-05T10:01:14.767Z] Copying: 843/1024 [MB] (11 MBps) [2024-12-05T10:01:15.708Z] Copying: 854/1024 [MB] (10 MBps) [2024-12-05T10:01:16.650Z] Copying: 865/1024 [MB] (10 MBps) [2024-12-05T10:01:18.034Z] Copying: 876/1024 [MB] (10 MBps) [2024-12-05T10:01:18.975Z] Copying: 886/1024 [MB] (10 MBps) [2024-12-05T10:01:19.918Z] Copying: 907/1024 [MB] (21 MBps) [2024-12-05T10:01:20.864Z] Copying: 918/1024 [MB] (10 MBps) [2024-12-05T10:01:21.808Z] Copying: 934/1024 [MB] (16 MBps) [2024-12-05T10:01:22.750Z] Copying: 945/1024 [MB] (10 MBps) [2024-12-05T10:01:23.695Z] Copying: 973/1024 [MB] (27 MBps) [2024-12-05T10:01:24.641Z] Copying: 998/1024 [MB] (24 MBps) [2024-12-05T10:01:25.584Z] Copying: 1012/1024 [MB] (14 MBps) [2024-12-05T10:01:25.584Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-05 10:01:25.363781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.955 [2024-12-05 10:01:25.363846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:37.955 [2024-12-05 10:01:25.363862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:37.955 [2024-12-05 10:01:25.363870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.955 [2024-12-05 10:01:25.363893] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:37.955 [2024-12-05 10:01:25.367106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.955 [2024-12-05 10:01:25.367156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:37.955 [2024-12-05 10:01:25.367169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.197 ms 00:29:37.955 [2024-12-05 10:01:25.367177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.955 [2024-12-05 10:01:25.367395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.956 [2024-12-05 10:01:25.367407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:37.956 [2024-12-05 10:01:25.367416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:29:37.956 [2024-12-05 10:01:25.367425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.956 [2024-12-05 10:01:25.371705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.956 [2024-12-05 10:01:25.371748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:37.956 [2024-12-05 10:01:25.371759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.264 ms 00:29:37.956 [2024-12-05 10:01:25.371773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.956 [2024-12-05 10:01:25.377975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.956 [2024-12-05 10:01:25.378169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:37.956 [2024-12-05 10:01:25.378191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.179 ms 00:29:37.956 [2024-12-05 10:01:25.378200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.956 
[2024-12-05 10:01:25.406706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.956 [2024-12-05 10:01:25.406754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:37.956 [2024-12-05 10:01:25.406767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.445 ms 00:29:37.956 [2024-12-05 10:01:25.406775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.956 [2024-12-05 10:01:25.422824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.956 [2024-12-05 10:01:25.422870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:37.956 [2024-12-05 10:01:25.422882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.003 ms 00:29:37.956 [2024-12-05 10:01:25.422890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.956 [2024-12-05 10:01:25.427608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.956 [2024-12-05 10:01:25.427655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:37.956 [2024-12-05 10:01:25.427667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.660 ms 00:29:37.956 [2024-12-05 10:01:25.427675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.956 [2024-12-05 10:01:25.453760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.956 [2024-12-05 10:01:25.453805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:37.956 [2024-12-05 10:01:25.453816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.069 ms 00:29:37.956 [2024-12-05 10:01:25.453823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.956 [2024-12-05 10:01:25.478814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.956 [2024-12-05 10:01:25.478857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:37.956 [2024-12-05 10:01:25.478868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.947 ms 00:29:37.956 [2024-12-05 10:01:25.478875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.956 [2024-12-05 10:01:25.503459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.956 [2024-12-05 10:01:25.503504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:37.956 [2024-12-05 10:01:25.503530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.542 ms 00:29:37.956 [2024-12-05 10:01:25.503538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.956 [2024-12-05 10:01:25.528035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.956 [2024-12-05 10:01:25.528079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:37.956 [2024-12-05 10:01:25.528098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.427 ms 00:29:37.956 [2024-12-05 10:01:25.528106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.956 [2024-12-05 10:01:25.528148] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:37.956 [2024-12-05 10:01:25.528170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:37.956 [2024-12-05 10:01:25.528184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 
wr_cnt: 1 state: open 00:29:37.956 [2024-12-05 10:01:25.528193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
27: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528594] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:37.956 [2024-12-05 10:01:25.528813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528820] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.528992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.529000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.529007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.529015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:37.957 [2024-12-05 10:01:25.529031] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:37.957 [2024-12-05 10:01:25.529039] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 72490c1c-2963-47bd-a5c9-10ef62f869eb 00:29:37.957 [2024-12-05 10:01:25.529047] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:37.957 [2024-12-05 10:01:25.529054] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:37.957 [2024-12-05 10:01:25.529061] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:37.957 [2024-12-05 10:01:25.529069] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:37.957 [2024-12-05 10:01:25.529085] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:37.957 [2024-12-05 10:01:25.529093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:37.957 [2024-12-05 10:01:25.529100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:37.957 [2024-12-05 10:01:25.529107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:37.957 [2024-12-05 10:01:25.529113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:37.957 [2024-12-05 10:01:25.529121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.957 [2024-12-05 10:01:25.529128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:37.957 [2024-12-05 10:01:25.529137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:29:37.957 [2024-12-05 10:01:25.529147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.957 [2024-12-05 10:01:25.542876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.957 [2024-12-05 10:01:25.542919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:37.957 [2024-12-05 10:01:25.542930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.709 ms 00:29:37.957 [2024-12-05 10:01:25.542938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.957 [2024-12-05 10:01:25.543332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.957 [2024-12-05 10:01:25.543350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:37.957 [2024-12-05 10:01:25.543359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:29:37.957 [2024-12-05 10:01:25.543366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.957 [2024-12-05 10:01:25.579606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.957 [2024-12-05 10:01:25.579652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:37.957 [2024-12-05 10:01:25.579665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.957 [2024-12-05 10:01:25.579674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.957 [2024-12-05 10:01:25.579735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.957 [2024-12-05 10:01:25.579750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:37.957 [2024-12-05 10:01:25.579760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.957 [2024-12-05 10:01:25.579768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.957 [2024-12-05 10:01:25.579854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.957 [2024-12-05 10:01:25.579866] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:37.957 [2024-12-05 10:01:25.579875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.957 [2024-12-05 10:01:25.579883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.957 [2024-12-05 10:01:25.579899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.957 [2024-12-05 10:01:25.579909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:37.957 [2024-12-05 10:01:25.579920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.957 [2024-12-05 10:01:25.579929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.218 [2024-12-05 10:01:25.665089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.218 [2024-12-05 10:01:25.665144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:38.218 [2024-12-05 10:01:25.665158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.218 [2024-12-05 10:01:25.665167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.218 [2024-12-05 10:01:25.734677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.218 [2024-12-05 10:01:25.734916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:38.218 [2024-12-05 10:01:25.734937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.218 [2024-12-05 10:01:25.734946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.218 [2024-12-05 10:01:25.735012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.218 [2024-12-05 10:01:25.735022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:38.218 [2024-12-05 10:01:25.735032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.218 [2024-12-05 10:01:25.735041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.218 [2024-12-05 10:01:25.735104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.218 [2024-12-05 10:01:25.735115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:38.218 [2024-12-05 10:01:25.735124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.218 [2024-12-05 10:01:25.735135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.218 [2024-12-05 10:01:25.735241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.218 [2024-12-05 10:01:25.735251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:38.218 [2024-12-05 10:01:25.735260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.218 [2024-12-05 10:01:25.735269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.218 [2024-12-05 10:01:25.735303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.218 [2024-12-05 10:01:25.735313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:38.218 [2024-12-05 10:01:25.735321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.218 [2024-12-05 10:01:25.735330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.218 [2024-12-05 10:01:25.735374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:29:38.218 [2024-12-05 10:01:25.735385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:38.218 [2024-12-05 10:01:25.735394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.218 [2024-12-05 10:01:25.735402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.218 [2024-12-05 10:01:25.735449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.218 [2024-12-05 10:01:25.735460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:38.218 [2024-12-05 10:01:25.735470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.218 [2024-12-05 10:01:25.735480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.218 [2024-12-05 10:01:25.735645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.826 ms, result 0 00:29:39.162 00:29:39.162 00:29:39.162 10:01:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:41.711 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:41.711 10:01:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:41.711 10:01:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:41.711 10:01:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:41.711 10:01:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:41.711 10:01:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:41.711 10:01:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:41.711 10:01:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:41.711 10:01:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80223 00:29:41.711 Process with pid 80223 is not found 00:29:41.711 10:01:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80223 ']' 00:29:41.711 10:01:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80223 00:29:41.711 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80223) - No such process 00:29:41.711 10:01:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80223 is not found' 00:29:41.711 10:01:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:41.711 Remove shared memory files 00:29:41.711 10:01:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:41.711 10:01:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:41.711 10:01:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:41.711 10:01:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:41.711 10:01:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:41.711 10:01:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:41.711 10:01:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:41.711 ************************************ 00:29:41.711 END TEST ftl_dirty_shutdown 00:29:41.711 ************************************ 00:29:41.711 
00:29:41.711 real 4m12.834s 00:29:41.711 user 4m40.385s 00:29:41.711 sys 0m25.746s 00:29:41.711 10:01:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:41.711 10:01:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:41.711 10:01:29 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:41.711 10:01:29 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:41.711 10:01:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:41.711 10:01:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:41.973 ************************************ 00:29:41.973 START TEST ftl_upgrade_shutdown 00:29:41.973 ************************************ 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:41.973 * Looking for test storage... 00:29:41.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:41.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.973 --rc genhtml_branch_coverage=1 00:29:41.973 --rc genhtml_function_coverage=1 00:29:41.973 --rc genhtml_legend=1 00:29:41.973 --rc geninfo_all_blocks=1 00:29:41.973 --rc geninfo_unexecuted_blocks=1 00:29:41.973 00:29:41.973 ' 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:41.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.973 --rc genhtml_branch_coverage=1 00:29:41.973 --rc genhtml_function_coverage=1 00:29:41.973 --rc genhtml_legend=1 00:29:41.973 --rc geninfo_all_blocks=1 00:29:41.973 --rc geninfo_unexecuted_blocks=1 00:29:41.973 00:29:41.973 ' 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:41.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.973 --rc genhtml_branch_coverage=1 00:29:41.973 --rc genhtml_function_coverage=1 00:29:41.973 --rc genhtml_legend=1 00:29:41.973 --rc geninfo_all_blocks=1 00:29:41.973 --rc geninfo_unexecuted_blocks=1 00:29:41.973 00:29:41.973 ' 00:29:41.973 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:41.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.973 --rc genhtml_branch_coverage=1 00:29:41.974 --rc genhtml_function_coverage=1 00:29:41.974 --rc genhtml_legend=1 00:29:41.974 --rc geninfo_all_blocks=1 00:29:41.974 --rc geninfo_unexecuted_blocks=1 00:29:41.974 00:29:41.974 ' 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:41.974 10:01:29 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82925 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82925 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82925 ']' 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.974 10:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:42.235 [2024-12-05 10:01:29.605777] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:29:42.235 [2024-12-05 10:01:29.606118] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82925 ] 00:29:42.235 [2024-12-05 10:01:29.771030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.497 [2024-12-05 10:01:29.897241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.069 10:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:43.070 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:43.332 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:43.332 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:43.332 10:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:43.332 10:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:43.332 10:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:43.332 10:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:43.332 10:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:43.332 10:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:43.593 10:01:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:43.593 { 00:29:43.593 "name": "basen1", 00:29:43.593 "aliases": [ 00:29:43.593 "f3c2d356-90f8-4725-accd-eeffad264fb1" 00:29:43.593 ], 00:29:43.593 "product_name": "NVMe disk", 00:29:43.593 "block_size": 4096, 00:29:43.593 "num_blocks": 1310720, 00:29:43.593 "uuid": "f3c2d356-90f8-4725-accd-eeffad264fb1", 00:29:43.593 "numa_id": -1, 00:29:43.593 "assigned_rate_limits": { 00:29:43.593 "rw_ios_per_sec": 0, 00:29:43.593 "rw_mbytes_per_sec": 0, 00:29:43.593 "r_mbytes_per_sec": 0, 00:29:43.593 "w_mbytes_per_sec": 0 00:29:43.593 }, 00:29:43.594 "claimed": true, 00:29:43.594 "claim_type": "read_many_write_one", 00:29:43.594 "zoned": false, 00:29:43.594 "supported_io_types": { 00:29:43.594 "read": true, 00:29:43.594 "write": true, 00:29:43.594 "unmap": true, 00:29:43.594 "flush": true, 00:29:43.594 "reset": true, 00:29:43.594 "nvme_admin": true, 00:29:43.594 "nvme_io": true, 00:29:43.594 "nvme_io_md": false, 00:29:43.594 "write_zeroes": true, 00:29:43.594 "zcopy": false, 00:29:43.594 "get_zone_info": false, 00:29:43.594 "zone_management": false, 00:29:43.594 "zone_append": false, 00:29:43.594 "compare": true, 00:29:43.594 "compare_and_write": false, 00:29:43.594 "abort": true, 00:29:43.594 "seek_hole": false, 00:29:43.594 "seek_data": false, 00:29:43.594 "copy": true, 00:29:43.594 "nvme_iov_md": false 00:29:43.594 }, 00:29:43.594 "driver_specific": { 00:29:43.594 "nvme": [ 00:29:43.594 { 00:29:43.594 "pci_address": "0000:00:11.0", 00:29:43.594 "trid": { 00:29:43.594 "trtype": "PCIe", 00:29:43.594 "traddr": "0000:00:11.0" 00:29:43.594 }, 00:29:43.594 "ctrlr_data": { 00:29:43.594 "cntlid": 0, 00:29:43.594 "vendor_id": "0x1b36", 00:29:43.594 "model_number": "QEMU NVMe Ctrl", 00:29:43.594 "serial_number": "12341", 00:29:43.594 "firmware_revision": "8.0.0", 00:29:43.594 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:43.594 "oacs": { 00:29:43.594 "security": 0, 00:29:43.594 "format": 1, 00:29:43.594 "firmware": 0, 00:29:43.594 "ns_manage": 1 00:29:43.594 }, 00:29:43.594 "multi_ctrlr": false, 00:29:43.594 "ana_reporting": false 00:29:43.594 }, 00:29:43.594 "vs": { 00:29:43.594 "nvme_version": "1.4" 00:29:43.594 }, 00:29:43.594 "ns_data": { 00:29:43.594 "id": 1, 00:29:43.594 "can_share": false 00:29:43.594 } 00:29:43.594 } 00:29:43.594 ], 00:29:43.594 "mp_policy": "active_passive" 00:29:43.594 } 00:29:43.594 } 00:29:43.594 ]' 00:29:43.594 10:01:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:43.594 10:01:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:43.594 10:01:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:43.594 10:01:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:43.594 10:01:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:43.594 10:01:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:43.594 10:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:43.594 10:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:43.594 10:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:43.594 10:01:31 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:43.594 10:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:43.856 10:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=8494fc65-f854-4003-a6cd-d8c625c4126a 00:29:43.856 10:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:43.856 10:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8494fc65-f854-4003-a6cd-d8c625c4126a 00:29:44.117 10:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:44.378 10:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=c077cd9c-57e1-4387-8c75-16527a69c0b7 00:29:44.378 10:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u c077cd9c-57e1-4387-8c75-16527a69c0b7 00:29:44.640 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=d4750e61-10d9-4617-a807-a14df7554cf9 00:29:44.640 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z d4750e61-10d9-4617-a807-a14df7554cf9 ]] 00:29:44.640 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 d4750e61-10d9-4617-a807-a14df7554cf9 5120 00:29:44.640 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:44.640 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:44.640 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=d4750e61-10d9-4617-a807-a14df7554cf9 00:29:44.640 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:44.640 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size d4750e61-10d9-4617-a807-a14df7554cf9 00:29:44.640 10:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d4750e61-10d9-4617-a807-a14df7554cf9 00:29:44.640 10:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:44.640 10:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:44.640 10:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:44.640 10:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d4750e61-10d9-4617-a807-a14df7554cf9 00:29:44.902 10:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:44.902 { 00:29:44.902 "name": "d4750e61-10d9-4617-a807-a14df7554cf9", 00:29:44.902 "aliases": [ 00:29:44.902 "lvs/basen1p0" 00:29:44.902 ], 00:29:44.902 "product_name": "Logical Volume", 00:29:44.902 "block_size": 4096, 00:29:44.902 "num_blocks": 5242880, 00:29:44.902 "uuid": "d4750e61-10d9-4617-a807-a14df7554cf9", 00:29:44.902 "assigned_rate_limits": { 00:29:44.902 "rw_ios_per_sec": 0, 00:29:44.902 "rw_mbytes_per_sec": 0, 00:29:44.902 "r_mbytes_per_sec": 0, 00:29:44.902 "w_mbytes_per_sec": 0 00:29:44.902 }, 00:29:44.902 "claimed": false, 00:29:44.902 "zoned": false, 00:29:44.902 "supported_io_types": { 00:29:44.902 "read": true, 00:29:44.902 "write": true, 00:29:44.902 "unmap": true, 00:29:44.902 "flush": false, 00:29:44.902 "reset": true, 00:29:44.902 "nvme_admin": false, 00:29:44.902 "nvme_io": false, 00:29:44.902 "nvme_io_md": false, 00:29:44.902 "write_zeroes": 
true, 00:29:44.902 "zcopy": false, 00:29:44.902 "get_zone_info": false, 00:29:44.902 "zone_management": false, 00:29:44.902 "zone_append": false, 00:29:44.902 "compare": false, 00:29:44.902 "compare_and_write": false, 00:29:44.902 "abort": false, 00:29:44.902 "seek_hole": true, 00:29:44.902 "seek_data": true, 00:29:44.902 "copy": false, 00:29:44.902 "nvme_iov_md": false 00:29:44.902 }, 00:29:44.902 "driver_specific": { 00:29:44.902 "lvol": { 00:29:44.902 "lvol_store_uuid": "c077cd9c-57e1-4387-8c75-16527a69c0b7", 00:29:44.902 "base_bdev": "basen1", 00:29:44.902 "thin_provision": true, 00:29:44.902 "num_allocated_clusters": 0, 00:29:44.902 "snapshot": false, 00:29:44.902 "clone": false, 00:29:44.902 "esnap_clone": false 00:29:44.902 } 00:29:44.902 } 00:29:44.902 } 00:29:44.902 ]' 00:29:44.902 10:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:44.902 10:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:44.902 10:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:44.902 10:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:44.902 10:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:44.902 10:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:44.902 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:44.902 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:44.902 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:45.163 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:45.163 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:45.163 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:45.425 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:45.425 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:45.425 10:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d d4750e61-10d9-4617-a807-a14df7554cf9 -c cachen1p0 --l2p_dram_limit 2 00:29:45.425 [2024-12-05 10:01:33.028976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.425 [2024-12-05 10:01:33.029018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:45.425 [2024-12-05 10:01:33.029031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:45.425 [2024-12-05 10:01:33.029038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.425 [2024-12-05 10:01:33.029080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.425 [2024-12-05 10:01:33.029088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:45.425 [2024-12-05 10:01:33.029096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:45.425 [2024-12-05 10:01:33.029103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.425 [2024-12-05 10:01:33.029118] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:45.425 [2024-12-05 
10:01:33.029690] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:45.426 [2024-12-05 10:01:33.029706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.426 [2024-12-05 10:01:33.029713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:45.426 [2024-12-05 10:01:33.029722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.589 ms 00:29:45.426 [2024-12-05 10:01:33.029728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.426 [2024-12-05 10:01:33.029751] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 4907301a-5f90-480a-83c6-a47abfc049a4 00:29:45.426 [2024-12-05 10:01:33.030703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.426 [2024-12-05 10:01:33.030727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:45.426 [2024-12-05 10:01:33.030734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:45.426 [2024-12-05 10:01:33.030741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.426 [2024-12-05 10:01:33.035360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.426 [2024-12-05 10:01:33.035392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:45.426 [2024-12-05 10:01:33.035399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.566 ms 00:29:45.426 [2024-12-05 10:01:33.035406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.426 [2024-12-05 10:01:33.035448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.426 [2024-12-05 10:01:33.035456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:45.426 [2024-12-05 10:01:33.035462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:45.426 [2024-12-05 10:01:33.035471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.426 [2024-12-05 10:01:33.035503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.426 [2024-12-05 10:01:33.035522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:45.426 [2024-12-05 10:01:33.035530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:45.426 [2024-12-05 10:01:33.035537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.426 [2024-12-05 10:01:33.035552] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:45.426 [2024-12-05 10:01:33.038424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.426 [2024-12-05 10:01:33.038448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:45.426 [2024-12-05 10:01:33.038458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.873 ms 00:29:45.426 [2024-12-05 10:01:33.038463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.426 [2024-12-05 10:01:33.038484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.426 [2024-12-05 10:01:33.038491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:45.426 [2024-12-05 10:01:33.038500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:45.426 [2024-12-05 10:01:33.038506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:45.426 [2024-12-05 10:01:33.038532] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:45.426 [2024-12-05 10:01:33.038639] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:45.426 [2024-12-05 10:01:33.038651] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:45.426 [2024-12-05 10:01:33.038660] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:45.426 [2024-12-05 10:01:33.038670] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:45.426 [2024-12-05 10:01:33.038677] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:45.426 [2024-12-05 10:01:33.038685] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:45.426 [2024-12-05 10:01:33.038691] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:45.426 [2024-12-05 10:01:33.038700] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:45.426 [2024-12-05 10:01:33.038705] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:45.426 [2024-12-05 10:01:33.038712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.426 [2024-12-05 10:01:33.038717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:45.426 [2024-12-05 10:01:33.038725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:29:45.426 [2024-12-05 10:01:33.038730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.426 [2024-12-05 10:01:33.038796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.426 [2024-12-05 10:01:33.038806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:45.426 [2024-12-05 10:01:33.038813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:45.426 [2024-12-05 10:01:33.038818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.426 [2024-12-05 10:01:33.038895] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:45.426 [2024-12-05 10:01:33.038903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:45.426 [2024-12-05 10:01:33.038910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:45.426 [2024-12-05 10:01:33.038916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.426 [2024-12-05 10:01:33.038923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:45.426 [2024-12-05 10:01:33.038929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:45.426 [2024-12-05 10:01:33.038935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:45.426 [2024-12-05 10:01:33.038940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:45.426 [2024-12-05 10:01:33.038946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:45.426 [2024-12-05 10:01:33.038951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.426 [2024-12-05 10:01:33.038958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:45.426 [2024-12-05 10:01:33.038964] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:45.426 [2024-12-05 10:01:33.038970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.426 [2024-12-05 10:01:33.038975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:45.426 [2024-12-05 10:01:33.038981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:45.426 [2024-12-05 10:01:33.038987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.426 [2024-12-05 10:01:33.038995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:45.426 [2024-12-05 10:01:33.039000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:45.426 [2024-12-05 10:01:33.039006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.426 [2024-12-05 10:01:33.039012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:45.426 [2024-12-05 10:01:33.039018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:45.426 [2024-12-05 10:01:33.039023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:45.426 [2024-12-05 10:01:33.039029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:45.426 [2024-12-05 10:01:33.039034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:45.426 [2024-12-05 10:01:33.039040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:45.426 [2024-12-05 10:01:33.039045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:45.426 [2024-12-05 10:01:33.039051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:45.426 [2024-12-05 10:01:33.039056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:45.426 [2024-12-05 10:01:33.039062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:45.426 [2024-12-05 10:01:33.039067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:45.426 [2024-12-05 10:01:33.039073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:45.426 [2024-12-05 10:01:33.039078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:45.426 [2024-12-05 10:01:33.039085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:45.426 [2024-12-05 10:01:33.039090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.426 [2024-12-05 10:01:33.039096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:45.427 [2024-12-05 10:01:33.039101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:45.427 [2024-12-05 10:01:33.039109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.427 [2024-12-05 10:01:33.039114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:45.427 [2024-12-05 10:01:33.039120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:45.427 [2024-12-05 10:01:33.039125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.427 [2024-12-05 10:01:33.039131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:45.427 [2024-12-05 10:01:33.039136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:45.427 [2024-12-05 10:01:33.039142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.427 [2024-12-05 10:01:33.039146] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:45.427 [2024-12-05 10:01:33.039153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:45.427 [2024-12-05 10:01:33.039159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:45.427 [2024-12-05 10:01:33.039165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.427 [2024-12-05 10:01:33.039173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:45.427 [2024-12-05 10:01:33.039181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:45.427 [2024-12-05 10:01:33.039186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:45.427 [2024-12-05 10:01:33.039192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:45.427 [2024-12-05 10:01:33.039197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:45.427 [2024-12-05 10:01:33.039204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:45.427 [2024-12-05 10:01:33.039210] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:45.427 [2024-12-05 10:01:33.039220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:45.427 [2024-12-05 10:01:33.039226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:45.427 [2024-12-05 10:01:33.039233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:45.427 [2024-12-05 10:01:33.039238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:45.427 [2024-12-05 10:01:33.039245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:45.427 [2024-12-05 10:01:33.039254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:45.427 [2024-12-05 10:01:33.039261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:45.427 [2024-12-05 10:01:33.039266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:45.427 [2024-12-05 10:01:33.039273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:45.427 [2024-12-05 10:01:33.039279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:45.427 [2024-12-05 10:01:33.039287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:45.427 [2024-12-05 10:01:33.039292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:45.427 [2024-12-05 10:01:33.039299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:45.427 [2024-12-05 10:01:33.039305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:45.427 [2024-12-05 10:01:33.039311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:45.427 [2024-12-05 10:01:33.039317] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:45.427 [2024-12-05 10:01:33.039324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:45.427 [2024-12-05 10:01:33.039330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:45.427 [2024-12-05 10:01:33.039336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:45.427 [2024-12-05 10:01:33.039341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:45.427 [2024-12-05 10:01:33.039348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:45.427 [2024-12-05 10:01:33.039354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.427 [2024-12-05 10:01:33.039361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:45.427 [2024-12-05 10:01:33.039367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.512 ms 00:29:45.427 [2024-12-05 10:01:33.039373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.427 [2024-12-05 10:01:33.039401] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
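The layout dump above can be sanity-checked by hand: the L2P region has to hold one physical address per logical block. A quick sketch of that arithmetic, using only the two values reported in the dump (1048576 is just bytes per MiB):

    entries=3774873   # "L2P entries: 3774873" from the dump above
    addr_size=4       # "L2P address size: 4"
    awk -v e="$entries" -v a="$addr_size" \
        'BEGIN { printf "raw L2P table: %.2f MiB\n", e * a / 1048576 }'
    # -> raw L2P table: 14.40 MiB; the dump reserves 14.50 MiB for region l2p,
    #    i.e. the raw table plus FTL-internal block alignment/padding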
00:29:45.427 [2024-12-05 10:01:33.039411] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:49.674 [2024-12-05 10:01:36.633612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.674 [2024-12-05 10:01:36.633865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:49.674 [2024-12-05 10:01:36.633893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3594.193 ms 00:29:49.674 [2024-12-05 10:01:36.633906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.674 [2024-12-05 10:01:36.665661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.674 [2024-12-05 10:01:36.665726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:49.674 [2024-12-05 10:01:36.665740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.483 ms 00:29:49.674 [2024-12-05 10:01:36.665751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.674 [2024-12-05 10:01:36.665842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.674 [2024-12-05 10:01:36.665856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:49.674 [2024-12-05 10:01:36.665866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:49.674 [2024-12-05 10:01:36.665882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.674 [2024-12-05 10:01:36.701573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.674 [2024-12-05 10:01:36.701630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:49.674 [2024-12-05 10:01:36.701642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.655 ms 00:29:49.674 [2024-12-05 10:01:36.701654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.674 [2024-12-05 10:01:36.701691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.674 [2024-12-05 10:01:36.701704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:49.674 [2024-12-05 10:01:36.701714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:49.674 [2024-12-05 10:01:36.701724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.674 [2024-12-05 10:01:36.702315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.674 [2024-12-05 10:01:36.702345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:49.674 [2024-12-05 10:01:36.702363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.519 ms 00:29:49.674 [2024-12-05 10:01:36.702373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.674 [2024-12-05 10:01:36.702419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.674 [2024-12-05 10:01:36.702431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:49.674 [2024-12-05 10:01:36.702443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:49.674 [2024-12-05 10:01:36.702455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.674 [2024-12-05 10:01:36.720187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.674 [2024-12-05 10:01:36.720240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:49.674 [2024-12-05 10:01:36.720251] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.711 ms 00:29:49.674 [2024-12-05 10:01:36.720262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.674 [2024-12-05 10:01:36.746763] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:49.674 [2024-12-05 10:01:36.748058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.674 [2024-12-05 10:01:36.748117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:49.674 [2024-12-05 10:01:36.748133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.704 ms 00:29:49.674 [2024-12-05 10:01:36.748142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.674 [2024-12-05 10:01:36.778359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.674 [2024-12-05 10:01:36.778597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:49.674 [2024-12-05 10:01:36.778630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.170 ms 00:29:49.674 [2024-12-05 10:01:36.778639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.674 [2024-12-05 10:01:36.778799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.674 [2024-12-05 10:01:36.778815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:49.674 [2024-12-05 10:01:36.778831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:29:49.674 [2024-12-05 10:01:36.778839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.674 [2024-12-05 10:01:36.804473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.674 [2024-12-05 10:01:36.804540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:49.674 [2024-12-05 10:01:36.804558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.572 ms 00:29:49.674 [2024-12-05 10:01:36.804567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.674 [2024-12-05 10:01:36.829956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.674 [2024-12-05 10:01:36.830160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:49.674 [2024-12-05 10:01:36.830188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.329 ms 00:29:49.674 [2024-12-05 10:01:36.830197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.674 [2024-12-05 10:01:36.831138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.674 [2024-12-05 10:01:36.831197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:49.674 [2024-12-05 10:01:36.831213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.574 ms 00:29:49.674 [2024-12-05 10:01:36.831225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.674 [2024-12-05 10:01:36.915978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.674 [2024-12-05 10:01:36.916035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:49.674 [2024-12-05 10:01:36.916056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 84.673 ms 00:29:49.674 [2024-12-05 10:01:36.916066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.674 [2024-12-05 10:01:36.943856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
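The trace_step NOTICE lines follow a fixed Action / name / duration / status pattern, so a per-step cost breakdown can be pulled straight out of a log like this one. A sketch, assuming the raw console output has been saved one entry per line to build.log (both the filename and that layout are assumptions of this sketch, not something the harness does):

    # rank FTL management steps by duration, longest first
    awk '/trace_step/ && /name:/     { sub(/.*name: /, ""); step = $0 }
         /trace_step/ && /duration:/ { if (match($0, /duration: [0-9.]+ ms/)) {
                                           d = substr($0, RSTART + 10, RLENGTH - 13)
                                           printf "%10.3f ms  %s\n", d, step } }' \
        build.log | sort -rn | head

For this run the top entry would be the NV cache scrub at 3594.193 ms, which dwarfs every other startup step traced above.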
00:29:49.674 [2024-12-05 10:01:36.944067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:49.674 [2024-12-05 10:01:36.944109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.679 ms 00:29:49.674 [2024-12-05 10:01:36.944119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.675 [2024-12-05 10:01:36.970118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.675 [2024-12-05 10:01:36.970177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:49.675 [2024-12-05 10:01:36.970195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.664 ms 00:29:49.675 [2024-12-05 10:01:36.970203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.675 [2024-12-05 10:01:36.996702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.675 [2024-12-05 10:01:36.996755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:49.675 [2024-12-05 10:01:36.996771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.443 ms 00:29:49.675 [2024-12-05 10:01:36.996780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.675 [2024-12-05 10:01:36.996838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.675 [2024-12-05 10:01:36.996848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:49.675 [2024-12-05 10:01:36.996863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:49.675 [2024-12-05 10:01:36.996871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.675 [2024-12-05 10:01:36.996967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.675 [2024-12-05 10:01:36.996980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:49.675 [2024-12-05 10:01:36.996991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:49.675 [2024-12-05 10:01:36.996999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.675 [2024-12-05 10:01:36.998248] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3968.762 ms, result 0 00:29:49.675 { 00:29:49.675 "name": "ftl", 00:29:49.675 "uuid": "4907301a-5f90-480a-83c6-a47abfc049a4" 00:29:49.675 } 00:29:49.675 10:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:49.675 [2024-12-05 10:01:37.221315] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.675 10:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:49.936 10:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:50.197 [2024-12-05 10:01:37.657829] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:50.197 10:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:50.458 [2024-12-05 10:01:37.875157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:50.458 10:01:37 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:50.720 Fill FTL, iteration 1 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83047 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83047 /var/tmp/spdk.tgt.sock 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83047 ']' 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:50.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.720 10:01:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:50.720 [2024-12-05 10:01:38.331587] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
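Each tcp_dd invocation in this test runs a second SPDK application (the initiator) on its own RPC socket and attaches it to the NVMe/TCP subsystem exported earlier, so the ftln1 bdev it reads and writes is reached over loopback TCP rather than in-process. A condensed sketch of that setup, using the commands visible in this trace (the redirect into ini.json is inferred: bash xtrace does not show redirections, but the harness checks for and later loads exactly that file):

    # initiator: a separate spdk_tgt pinned to core 1, on its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock &

    # once it listens, attach the target's subsystem; this creates bdev ftln1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
        bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0

    # capture the resulting bdev config so spdk_dd can load it via --json
    { echo '{"subsystems": ['
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
          save_subsystem_config -n bdev
      echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json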
00:29:50.720 [2024-12-05 10:01:38.332442] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83047 ] 00:29:50.981 [2024-12-05 10:01:38.495429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.981 [2024-12-05 10:01:38.604268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.923 10:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.923 10:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:51.923 10:01:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:51.923 ftln1 00:29:51.923 10:01:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:51.923 10:01:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:52.184 10:01:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:52.184 10:01:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83047 00:29:52.184 10:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83047 ']' 00:29:52.184 10:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83047 00:29:52.184 10:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:52.184 10:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:52.184 10:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83047 00:29:52.184 killing process with pid 83047 00:29:52.184 10:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:52.184 10:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:52.184 10:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83047' 00:29:52.184 10:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83047 00:29:52.184 10:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83047 00:29:53.568 10:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:53.568 10:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:53.827 [2024-12-05 10:01:41.226623] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:29:53.827 [2024-12-05 10:01:41.226737] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83096 ] 00:29:53.827 [2024-12-05 10:01:41.382104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.084 [2024-12-05 10:01:41.465895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.458  [2024-12-05T10:01:44.022Z] Copying: 258/1024 [MB] (258 MBps) [2024-12-05T10:01:44.958Z] Copying: 492/1024 [MB] (234 MBps) [2024-12-05T10:01:45.891Z] Copying: 719/1024 [MB] (227 MBps) [2024-12-05T10:01:46.148Z] Copying: 961/1024 [MB] (242 MBps) [2024-12-05T10:01:46.716Z] Copying: 1024/1024 [MB] (average 238 MBps) 00:29:59.087 00:29:59.346 Calculate MD5 checksum, iteration 1 00:29:59.346 10:01:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:59.346 10:01:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:59.346 10:01:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:59.346 10:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:59.346 10:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:59.346 10:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:59.346 10:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:59.346 10:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:59.346 [2024-12-05 10:01:46.785913] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
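After each fill pass, the harness reads the same 1 GiB slice back out of ftln1 into a plain file and records its MD5 in the sums array for later comparison. The read-back side as traced here, with paths and flags verbatim from the log:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    # fingerprint the slice; iteration 1 comes out to
    # 86ac81660784687901b27160df4e2860 just below
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')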
00:29:59.346 [2024-12-05 10:01:46.786050] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83153 ] 00:29:59.346 [2024-12-05 10:01:46.945722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.604 [2024-12-05 10:01:47.036879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.980  [2024-12-05T10:01:49.176Z] Copying: 626/1024 [MB] (626 MBps) [2024-12-05T10:01:49.744Z] Copying: 1024/1024 [MB] (average 630 MBps) 00:30:02.115 00:30:02.115 10:01:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:02.115 10:01:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:04.661 10:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:04.661 Fill FTL, iteration 2 00:30:04.661 10:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=86ac81660784687901b27160df4e2860 00:30:04.661 10:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:04.661 10:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:04.661 10:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:04.661 10:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:04.661 10:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:04.661 10:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:04.661 10:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:04.661 10:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:04.661 10:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:04.661 [2024-12-05 10:01:51.790115] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
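The loop keeps two cursors measured in 1 MiB blocks: seek for writes into ftln1 and skip for read-backs, each advanced by count per iteration, so the second pass below covers MiB 1024-2047 where the first covered 0-1023. A sketch of the overall shape, reconstructed from the upgrade_shutdown.sh lines quoted in this trace (tcp_dd is the harness helper shown above; $FTL_FILE stands in for the test/ftl/file path):

    seek=0; skip=0; bs=1048576; count=1024; iterations=2; qd=2
    sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs="$bs" --count="$count" \
               --qd="$qd" --seek="$seek"
        seek=$(( seek + count ))
        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of="$FTL_FILE" --bs="$bs" --count="$count" \
               --qd="$qd" --skip="$skip"
        skip=$(( skip + count ))
        sums[i]=$(md5sum "$FTL_FILE" | cut -f1 -d' ')
    done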
00:30:04.661 [2024-12-05 10:01:51.790229] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83210 ] 00:30:04.661 [2024-12-05 10:01:51.951059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.661 [2024-12-05 10:01:52.056833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.044  [2024-12-05T10:01:54.608Z] Copying: 196/1024 [MB] (196 MBps) [2024-12-05T10:01:55.542Z] Copying: 396/1024 [MB] (200 MBps) [2024-12-05T10:01:56.476Z] Copying: 640/1024 [MB] (244 MBps) [2024-12-05T10:01:57.411Z] Copying: 866/1024 [MB] (226 MBps) [2024-12-05T10:01:57.979Z] Copying: 1024/1024 [MB] (average 218 MBps) 00:30:10.350 00:30:10.350 Calculate MD5 checksum, iteration 2 00:30:10.350 10:01:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:10.350 10:01:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:10.350 10:01:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:10.350 10:01:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:10.350 10:01:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:10.350 10:01:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:10.350 10:01:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:10.350 10:01:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:10.350 [2024-12-05 10:01:57.800585] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:30:10.350 [2024-12-05 10:01:57.800703] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83274 ] 00:30:10.350 [2024-12-05 10:01:57.956748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.609 [2024-12-05 10:01:58.039285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.986  [2024-12-05T10:02:00.180Z] Copying: 661/1024 [MB] (661 MBps) [2024-12-05T10:02:01.118Z] Copying: 1024/1024 [MB] (average 642 MBps) 00:30:13.489 00:30:13.489 10:02:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:13.489 10:02:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:16.035 10:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:16.035 10:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=081c828a6dd46c4fbd5fdafa40128104 00:30:16.035 10:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:16.035 10:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:16.035 10:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:16.035 [2024-12-05 10:02:03.333359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.035 [2024-12-05 10:02:03.333492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:16.035 [2024-12-05 10:02:03.333521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:16.035 [2024-12-05 10:02:03.333529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.035 [2024-12-05 10:02:03.333552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.035 [2024-12-05 10:02:03.333562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:16.035 [2024-12-05 10:02:03.333570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:16.035 [2024-12-05 10:02:03.333576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.035 [2024-12-05 10:02:03.333591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.035 [2024-12-05 10:02:03.333598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:16.035 [2024-12-05 10:02:03.333604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:16.035 [2024-12-05 10:02:03.333609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.035 [2024-12-05 10:02:03.333662] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.287 ms, result 0 00:30:16.035 true 00:30:16.035 10:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:16.035 { 00:30:16.035 "name": "ftl", 00:30:16.035 "properties": [ 00:30:16.035 { 00:30:16.035 "name": "superblock_version", 00:30:16.035 "value": 5, 00:30:16.035 "read-only": true 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "name": "base_device", 00:30:16.035 "bands": [ 00:30:16.035 { 00:30:16.035 "id": 0, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 
00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 1, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 2, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 3, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 4, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 5, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 6, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 7, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 8, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 9, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 10, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 11, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 12, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 13, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 14, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 15, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 16, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 17, 00:30:16.035 "state": "FREE", 00:30:16.035 "validity": 0.0 00:30:16.035 } 00:30:16.035 ], 00:30:16.035 "read-only": true 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "name": "cache_device", 00:30:16.035 "type": "bdev", 00:30:16.035 "chunks": [ 00:30:16.035 { 00:30:16.035 "id": 0, 00:30:16.035 "state": "INACTIVE", 00:30:16.035 "utilization": 0.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 1, 00:30:16.035 "state": "CLOSED", 00:30:16.035 "utilization": 1.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 2, 00:30:16.035 "state": "CLOSED", 00:30:16.035 "utilization": 1.0 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 3, 00:30:16.035 "state": "OPEN", 00:30:16.035 "utilization": 0.001953125 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "id": 4, 00:30:16.035 "state": "OPEN", 00:30:16.035 "utilization": 0.0 00:30:16.035 } 00:30:16.035 ], 00:30:16.035 "read-only": true 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "name": "verbose_mode", 00:30:16.035 "value": true, 00:30:16.035 "unit": "", 00:30:16.035 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:16.035 }, 00:30:16.035 { 00:30:16.035 "name": "prep_upgrade_on_shutdown", 00:30:16.035 "value": false, 00:30:16.035 "unit": "", 00:30:16.035 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:16.035 } 00:30:16.035 ] 00:30:16.035 } 00:30:16.035 10:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:16.296 [2024-12-05 10:02:03.701653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
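The properties dump above shows two CLOSED cache chunks at utilization 1.0 and one OPEN chunk at 0.001953125, so three chunks currently hold data. The harness counts exactly those with the rpc.py-plus-jq pipeline traced just below; a standalone form of the same query:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length'
    # -> 3 for the state above; the [[ 3 -eq 0 ]] check below therefore
    #    evaluates false, confirming the NV cache is not empty before the
    #    prep_upgrade_on_shutdown path is exercised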
00:30:16.296 [2024-12-05 10:02:03.701777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:16.296 [2024-12-05 10:02:03.701826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:16.296 [2024-12-05 10:02:03.701844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.296 [2024-12-05 10:02:03.701876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.296 [2024-12-05 10:02:03.701893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:16.296 [2024-12-05 10:02:03.701909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:16.296 [2024-12-05 10:02:03.701923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.296 [2024-12-05 10:02:03.701980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.296 [2024-12-05 10:02:03.701999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:16.296 [2024-12-05 10:02:03.702014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:16.296 [2024-12-05 10:02:03.702028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.296 [2024-12-05 10:02:03.702084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.418 ms, result 0 00:30:16.296 true 00:30:16.296 10:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:16.296 10:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:16.296 10:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:16.557 10:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:16.557 10:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:16.557 10:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:16.557 [2024-12-05 10:02:04.109983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.557 [2024-12-05 10:02:04.110111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:16.557 [2024-12-05 10:02:04.110125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:16.557 [2024-12-05 10:02:04.110132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.557 [2024-12-05 10:02:04.110152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.557 [2024-12-05 10:02:04.110159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:16.557 [2024-12-05 10:02:04.110165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:16.557 [2024-12-05 10:02:04.110171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.557 [2024-12-05 10:02:04.110185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.557 [2024-12-05 10:02:04.110191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:16.557 [2024-12-05 10:02:04.110197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:16.557 [2024-12-05 10:02:04.110202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:16.557 [2024-12-05 10:02:04.110249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.254 ms, result 0 00:30:16.557 true 00:30:16.557 10:02:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:16.819 { 00:30:16.819 "name": "ftl", 00:30:16.819 "properties": [ 00:30:16.819 { 00:30:16.819 "name": "superblock_version", 00:30:16.819 "value": 5, 00:30:16.819 "read-only": true 00:30:16.819 }, 00:30:16.819 { 00:30:16.819 "name": "base_device", 00:30:16.819 "bands": [ 00:30:16.819 { 00:30:16.819 "id": 0, 00:30:16.819 "state": "FREE", 00:30:16.819 "validity": 0.0 00:30:16.819 }, 00:30:16.819 { 00:30:16.819 "id": 1, 00:30:16.819 "state": "FREE", 00:30:16.819 "validity": 0.0 00:30:16.819 }, 00:30:16.819 { 00:30:16.819 "id": 2, 00:30:16.819 "state": "FREE", 00:30:16.819 "validity": 0.0 00:30:16.819 }, 00:30:16.819 { 00:30:16.819 "id": 3, 00:30:16.819 "state": "FREE", 00:30:16.819 "validity": 0.0 00:30:16.819 }, 00:30:16.819 { 00:30:16.819 "id": 4, 00:30:16.819 "state": "FREE", 00:30:16.819 "validity": 0.0 00:30:16.819 }, 00:30:16.819 { 00:30:16.819 "id": 5, 00:30:16.819 "state": "FREE", 00:30:16.819 "validity": 0.0 00:30:16.819 }, 00:30:16.819 { 00:30:16.819 "id": 6, 00:30:16.819 "state": "FREE", 00:30:16.819 "validity": 0.0 00:30:16.819 }, 00:30:16.819 { 00:30:16.819 "id": 7, 00:30:16.819 "state": "FREE", 00:30:16.819 "validity": 0.0 00:30:16.819 }, 00:30:16.819 { 00:30:16.819 "id": 8, 00:30:16.819 "state": "FREE", 00:30:16.819 "validity": 0.0 00:30:16.819 }, 00:30:16.819 { 00:30:16.819 "id": 9, 00:30:16.819 "state": "FREE", 00:30:16.819 "validity": 0.0 00:30:16.819 }, 00:30:16.819 { 00:30:16.819 "id": 10, 00:30:16.819 "state": "FREE", 00:30:16.819 "validity": 0.0 00:30:16.819 }, 00:30:16.819 { 00:30:16.819 "id": 11, 00:30:16.819 "state": "FREE", 00:30:16.820 "validity": 0.0 00:30:16.820 }, 00:30:16.820 { 00:30:16.820 "id": 12, 00:30:16.820 "state": "FREE", 00:30:16.820 "validity": 0.0 00:30:16.820 }, 00:30:16.820 { 00:30:16.820 "id": 13, 00:30:16.820 "state": "FREE", 00:30:16.820 "validity": 0.0 00:30:16.820 }, 00:30:16.820 { 00:30:16.820 "id": 14, 00:30:16.820 "state": "FREE", 00:30:16.820 "validity": 0.0 00:30:16.820 }, 00:30:16.820 { 00:30:16.820 "id": 15, 00:30:16.820 "state": "FREE", 00:30:16.820 "validity": 0.0 00:30:16.820 }, 00:30:16.820 { 00:30:16.820 "id": 16, 00:30:16.820 "state": "FREE", 00:30:16.820 "validity": 0.0 00:30:16.820 }, 00:30:16.820 { 00:30:16.820 "id": 17, 00:30:16.820 "state": "FREE", 00:30:16.820 "validity": 0.0 00:30:16.820 } 00:30:16.820 ], 00:30:16.820 "read-only": true 00:30:16.820 }, 00:30:16.820 { 00:30:16.820 "name": "cache_device", 00:30:16.820 "type": "bdev", 00:30:16.820 "chunks": [ 00:30:16.820 { 00:30:16.820 "id": 0, 00:30:16.820 "state": "INACTIVE", 00:30:16.820 "utilization": 0.0 00:30:16.820 }, 00:30:16.820 { 00:30:16.820 "id": 1, 00:30:16.820 "state": "CLOSED", 00:30:16.820 "utilization": 1.0 00:30:16.820 }, 00:30:16.820 { 00:30:16.820 "id": 2, 00:30:16.820 "state": "CLOSED", 00:30:16.820 "utilization": 1.0 00:30:16.820 }, 00:30:16.820 { 00:30:16.820 "id": 3, 00:30:16.820 "state": "OPEN", 00:30:16.820 "utilization": 0.001953125 00:30:16.820 }, 00:30:16.820 { 00:30:16.820 "id": 4, 00:30:16.820 "state": "OPEN", 00:30:16.820 "utilization": 0.0 00:30:16.820 } 00:30:16.820 ], 00:30:16.820 "read-only": true 00:30:16.820 }, 00:30:16.820 { 00:30:16.820 "name": "verbose_mode", 
00:30:16.820 "value": true, 00:30:16.820 "unit": "", 00:30:16.820 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:16.820 }, 00:30:16.820 { 00:30:16.820 "name": "prep_upgrade_on_shutdown", 00:30:16.820 "value": true, 00:30:16.820 "unit": "", 00:30:16.820 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:16.820 } 00:30:16.820 ] 00:30:16.820 } 00:30:16.820 10:02:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:16.820 10:02:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82925 ]] 00:30:16.820 10:02:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82925 00:30:16.820 10:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82925 ']' 00:30:16.820 10:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82925 00:30:16.820 10:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:16.820 10:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:16.820 10:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82925 00:30:16.820 killing process with pid 82925 00:30:16.820 10:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:16.820 10:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:16.820 10:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82925' 00:30:16.820 10:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82925 00:30:16.820 10:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82925 00:30:17.392 [2024-12-05 10:02:04.877319] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:17.392 [2024-12-05 10:02:04.888796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.392 [2024-12-05 10:02:04.888827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:17.392 [2024-12-05 10:02:04.888837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:17.392 [2024-12-05 10:02:04.888843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.392 [2024-12-05 10:02:04.888860] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:17.392 [2024-12-05 10:02:04.890888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.392 [2024-12-05 10:02:04.890909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:17.392 [2024-12-05 10:02:04.890917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.018 ms 00:30:17.392 [2024-12-05 10:02:04.890927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.457 [2024-12-05 10:02:13.311043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.457 [2024-12-05 10:02:13.311285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:27.457 [2024-12-05 10:02:13.311531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8420.062 ms 00:30:27.457 [2024-12-05 10:02:13.311577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.457 [2024-12-05 10:02:13.313224] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:27.457 [2024-12-05 10:02:13.313366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:27.457 [2024-12-05 10:02:13.313430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.604 ms 00:30:27.457 [2024-12-05 10:02:13.313456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.457 [2024-12-05 10:02:13.314699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.457 [2024-12-05 10:02:13.314852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:27.457 [2024-12-05 10:02:13.314922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.105 ms 00:30:27.457 [2024-12-05 10:02:13.314956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.457 [2024-12-05 10:02:13.326368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.457 [2024-12-05 10:02:13.326547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:27.457 [2024-12-05 10:02:13.326628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.354 ms 00:30:27.457 [2024-12-05 10:02:13.326652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.457 [2024-12-05 10:02:13.334173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.457 [2024-12-05 10:02:13.334338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:27.457 [2024-12-05 10:02:13.334405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.473 ms 00:30:27.457 [2024-12-05 10:02:13.334429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.457 [2024-12-05 10:02:13.334598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.457 [2024-12-05 10:02:13.334752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:27.458 [2024-12-05 10:02:13.334781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.090 ms 00:30:27.458 [2024-12-05 10:02:13.334802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.458 [2024-12-05 10:02:13.345447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.458 [2024-12-05 10:02:13.345619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:27.458 [2024-12-05 10:02:13.345678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.614 ms 00:30:27.458 [2024-12-05 10:02:13.345701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.458 [2024-12-05 10:02:13.356009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.458 [2024-12-05 10:02:13.356178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:27.458 [2024-12-05 10:02:13.356197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.260 ms 00:30:27.458 [2024-12-05 10:02:13.356205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.458 [2024-12-05 10:02:13.366516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.458 [2024-12-05 10:02:13.366676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:27.458 [2024-12-05 10:02:13.366734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.264 ms 00:30:27.458 [2024-12-05 10:02:13.366756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.458 [2024-12-05 10:02:13.386305] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.458 [2024-12-05 10:02:13.386492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:27.458 [2024-12-05 10:02:13.386945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.095 ms 00:30:27.458 [2024-12-05 10:02:13.386997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.458 [2024-12-05 10:02:13.387071] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:27.458 [2024-12-05 10:02:13.387208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:27.458 [2024-12-05 10:02:13.387732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:27.458 [2024-12-05 10:02:13.387755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:27.458 [2024-12-05 10:02:13.387765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:27.458 [2024-12-05 10:02:13.387775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:27.458 [2024-12-05 10:02:13.387784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:27.458 [2024-12-05 10:02:13.387792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:27.458 [2024-12-05 10:02:13.387800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:27.458 [2024-12-05 10:02:13.387808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:27.458 [2024-12-05 10:02:13.387817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:27.458 [2024-12-05 10:02:13.387824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:27.458 [2024-12-05 10:02:13.387832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:27.458 [2024-12-05 10:02:13.387840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:27.458 [2024-12-05 10:02:13.387849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:27.458 [2024-12-05 10:02:13.387856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:27.458 [2024-12-05 10:02:13.387863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:27.458 [2024-12-05 10:02:13.387871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:27.458 [2024-12-05 10:02:13.387879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:27.458 [2024-12-05 10:02:13.387889] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:27.458 [2024-12-05 10:02:13.387898] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 4907301a-5f90-480a-83c6-a47abfc049a4 00:30:27.458 [2024-12-05 10:02:13.387906] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:27.458 [2024-12-05 10:02:13.387914] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:27.458 [2024-12-05 10:02:13.387920] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:27.458 [2024-12-05 10:02:13.387929] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:27.458 [2024-12-05 10:02:13.387943] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:27.458 [2024-12-05 10:02:13.387952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:27.458 [2024-12-05 10:02:13.387963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:27.458 [2024-12-05 10:02:13.387970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:27.458 [2024-12-05 10:02:13.387976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:27.458 [2024-12-05 10:02:13.387986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.458 [2024-12-05 10:02:13.387995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:27.458 [2024-12-05 10:02:13.388004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.916 ms 00:30:27.458 [2024-12-05 10:02:13.388012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.458 [2024-12-05 10:02:13.401958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.458 [2024-12-05 10:02:13.402129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:27.458 [2024-12-05 10:02:13.402154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.910 ms 00:30:27.458 [2024-12-05 10:02:13.402163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.458 [2024-12-05 10:02:13.402592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.458 [2024-12-05 10:02:13.402607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:27.458 [2024-12-05 10:02:13.402618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.385 ms 00:30:27.458 [2024-12-05 10:02:13.402626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.458 [2024-12-05 10:02:13.449420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.458 [2024-12-05 10:02:13.449608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:27.458 [2024-12-05 10:02:13.449628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.458 [2024-12-05 10:02:13.449638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.458 [2024-12-05 10:02:13.449677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.458 [2024-12-05 10:02:13.449686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:27.458 [2024-12-05 10:02:13.449695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.458 [2024-12-05 10:02:13.449703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.458 [2024-12-05 10:02:13.449783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.458 [2024-12-05 10:02:13.449795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:27.458 [2024-12-05 10:02:13.449810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.458 [2024-12-05 10:02:13.449818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.458 [2024-12-05 10:02:13.449836] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.458 [2024-12-05 10:02:13.449845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:27.458 [2024-12-05 10:02:13.449853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.458 [2024-12-05 10:02:13.449862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.458 [2024-12-05 10:02:13.534846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.458 [2024-12-05 10:02:13.534901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:27.458 [2024-12-05 10:02:13.534920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.458 [2024-12-05 10:02:13.534929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.458 [2024-12-05 10:02:13.606081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.459 [2024-12-05 10:02:13.606137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:27.459 [2024-12-05 10:02:13.606150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.459 [2024-12-05 10:02:13.606159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.459 [2024-12-05 10:02:13.606261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.459 [2024-12-05 10:02:13.606273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:27.459 [2024-12-05 10:02:13.606282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.459 [2024-12-05 10:02:13.606298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.459 [2024-12-05 10:02:13.606344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.459 [2024-12-05 10:02:13.606353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:27.459 [2024-12-05 10:02:13.606362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.459 [2024-12-05 10:02:13.606370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.459 [2024-12-05 10:02:13.606471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.459 [2024-12-05 10:02:13.606481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:27.459 [2024-12-05 10:02:13.606490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.459 [2024-12-05 10:02:13.606498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.459 [2024-12-05 10:02:13.606565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.459 [2024-12-05 10:02:13.606577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:27.459 [2024-12-05 10:02:13.606586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.459 [2024-12-05 10:02:13.606594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.459 [2024-12-05 10:02:13.606638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.459 [2024-12-05 10:02:13.606648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:27.459 [2024-12-05 10:02:13.606658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.459 [2024-12-05 10:02:13.606667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.459 
[2024-12-05 10:02:13.606720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.459 [2024-12-05 10:02:13.606731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:27.459 [2024-12-05 10:02:13.606739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.459 [2024-12-05 10:02:13.606747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.459 [2024-12-05 10:02:13.606886] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8718.016 ms, result 0 00:30:31.662 10:02:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:31.662 10:02:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:31.662 10:02:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:31.662 10:02:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:31.662 10:02:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:31.662 10:02:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83472 00:30:31.662 10:02:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:31.662 10:02:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83472 00:30:31.662 10:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83472 ']' 00:30:31.662 10:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.662 10:02:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:31.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.662 10:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:31.662 10:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.662 10:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:31.662 10:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:31.662 [2024-12-05 10:02:18.888060] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
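The target restart above follows the stock SPDK autotest pattern: spdk_tgt is relaunched with a pinned cpumask and the tgt.json config saved before shutdown, and waitforlisten blocks until the new process (pid 83472) answers on its RPC socket. A minimal sketch of that wait loop, using only RPCs visible in this run and a hypothetical helper name; the authoritative implementation lives in autotest_common.sh:

  # waitforlisten_sketch PID [RPC_SOCK] -- poll until the target's RPC socket
  # exists and responds, or the process dies. Assumes the default
  # /var/tmp/spdk.sock path used by this test; not the exact autotest logic.
  waitforlisten_sketch() {
      local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1    # target exited early
          if [[ -S $rpc_sock ]] &&
              /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
              return 0                               # RPC is up, target is listening
          fi
          sleep 0.1
      done
      return 1                                       # timed out
  }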
00:30:31.662 [2024-12-05 10:02:18.888223] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83472 ] 00:30:31.662 [2024-12-05 10:02:19.050694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.662 [2024-12-05 10:02:19.174332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.604 [2024-12-05 10:02:19.931799] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:32.604 [2024-12-05 10:02:19.931865] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:32.604 [2024-12-05 10:02:20.076642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:32.604 [2024-12-05 10:02:20.076679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:32.604 [2024-12-05 10:02:20.076690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:32.604 [2024-12-05 10:02:20.076696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:32.604 [2024-12-05 10:02:20.076736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:32.604 [2024-12-05 10:02:20.076744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:32.604 [2024-12-05 10:02:20.076751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:30:32.604 [2024-12-05 10:02:20.076756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:32.604 [2024-12-05 10:02:20.076773] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:32.604 [2024-12-05 10:02:20.077313] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:32.604 [2024-12-05 10:02:20.077324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:32.604 [2024-12-05 10:02:20.077331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:32.604 [2024-12-05 10:02:20.077337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.557 ms 00:30:32.604 [2024-12-05 10:02:20.077343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:32.604 [2024-12-05 10:02:20.078323] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:32.604 [2024-12-05 10:02:20.087863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:32.604 [2024-12-05 10:02:20.087891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:32.604 [2024-12-05 10:02:20.087904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.541 ms 00:30:32.604 [2024-12-05 10:02:20.087910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:32.604 [2024-12-05 10:02:20.087956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:32.604 [2024-12-05 10:02:20.087964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:32.604 [2024-12-05 10:02:20.087971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:32.604 [2024-12-05 10:02:20.087976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:32.604 [2024-12-05 10:02:20.092483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:32.604 [2024-12-05 
10:02:20.092521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:32.604 [2024-12-05 10:02:20.092529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.457 ms 00:30:32.604 [2024-12-05 10:02:20.092535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:32.604 [2024-12-05 10:02:20.092580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:32.604 [2024-12-05 10:02:20.092587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:32.604 [2024-12-05 10:02:20.092594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:32.604 [2024-12-05 10:02:20.092599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:32.604 [2024-12-05 10:02:20.092634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:32.604 [2024-12-05 10:02:20.092643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:32.604 [2024-12-05 10:02:20.092649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:32.604 [2024-12-05 10:02:20.092655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:32.604 [2024-12-05 10:02:20.092670] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:32.604 [2024-12-05 10:02:20.095239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:32.604 [2024-12-05 10:02:20.095360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:32.604 [2024-12-05 10:02:20.095372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.572 ms 00:30:32.604 [2024-12-05 10:02:20.095382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:32.604 [2024-12-05 10:02:20.095408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:32.604 [2024-12-05 10:02:20.095415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:32.604 [2024-12-05 10:02:20.095421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:32.604 [2024-12-05 10:02:20.095427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:32.604 [2024-12-05 10:02:20.095443] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:32.604 [2024-12-05 10:02:20.095521] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:32.604 [2024-12-05 10:02:20.095549] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:32.604 [2024-12-05 10:02:20.095560] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:32.604 [2024-12-05 10:02:20.095639] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:32.604 [2024-12-05 10:02:20.095647] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:32.604 [2024-12-05 10:02:20.095655] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:32.604 [2024-12-05 10:02:20.095663] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:32.604 [2024-12-05 10:02:20.095669] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:32.604 [2024-12-05 10:02:20.095678] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:32.604 [2024-12-05 10:02:20.095684] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:32.604 [2024-12-05 10:02:20.095689] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:32.604 [2024-12-05 10:02:20.095694] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:32.604 [2024-12-05 10:02:20.095700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:32.604 [2024-12-05 10:02:20.095706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:32.604 [2024-12-05 10:02:20.095712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.259 ms 00:30:32.604 [2024-12-05 10:02:20.095717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:32.604 [2024-12-05 10:02:20.095781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:32.604 [2024-12-05 10:02:20.095788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:32.604 [2024-12-05 10:02:20.095795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:32.604 [2024-12-05 10:02:20.095800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:32.604 [2024-12-05 10:02:20.095875] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:32.604 [2024-12-05 10:02:20.095882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:32.604 [2024-12-05 10:02:20.095888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:32.604 [2024-12-05 10:02:20.095894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:32.604 [2024-12-05 10:02:20.095900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:32.604 [2024-12-05 10:02:20.095905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:32.604 [2024-12-05 10:02:20.095910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:32.604 [2024-12-05 10:02:20.095915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:32.604 [2024-12-05 10:02:20.095921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:32.605 [2024-12-05 10:02:20.095926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:32.605 [2024-12-05 10:02:20.095931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:32.605 [2024-12-05 10:02:20.095936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:32.605 [2024-12-05 10:02:20.095941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:32.605 [2024-12-05 10:02:20.095947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:32.605 [2024-12-05 10:02:20.095952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:32.605 [2024-12-05 10:02:20.095958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:32.605 [2024-12-05 10:02:20.095963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:32.605 [2024-12-05 10:02:20.095968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:32.605 [2024-12-05 10:02:20.095973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:32.605 [2024-12-05 10:02:20.095978] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:32.605 [2024-12-05 10:02:20.095983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:32.605 [2024-12-05 10:02:20.095988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:32.605 [2024-12-05 10:02:20.095993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:32.605 [2024-12-05 10:02:20.096002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:32.605 [2024-12-05 10:02:20.096007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:32.605 [2024-12-05 10:02:20.096012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:32.605 [2024-12-05 10:02:20.096017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:32.605 [2024-12-05 10:02:20.096021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:32.605 [2024-12-05 10:02:20.096027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:32.605 [2024-12-05 10:02:20.096032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:32.605 [2024-12-05 10:02:20.096036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:32.605 [2024-12-05 10:02:20.096041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:32.605 [2024-12-05 10:02:20.096046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:32.605 [2024-12-05 10:02:20.096051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:32.605 [2024-12-05 10:02:20.096056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:32.605 [2024-12-05 10:02:20.096061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:32.605 [2024-12-05 10:02:20.096066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:32.605 [2024-12-05 10:02:20.096071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:32.605 [2024-12-05 10:02:20.096075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:32.605 [2024-12-05 10:02:20.096080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:32.605 [2024-12-05 10:02:20.096085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:32.605 [2024-12-05 10:02:20.096090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:32.605 [2024-12-05 10:02:20.096095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:32.605 [2024-12-05 10:02:20.096100] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:32.605 [2024-12-05 10:02:20.096114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:32.605 [2024-12-05 10:02:20.096121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:32.605 [2024-12-05 10:02:20.096127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:32.605 [2024-12-05 10:02:20.096134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:32.605 [2024-12-05 10:02:20.096140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:32.605 [2024-12-05 10:02:20.096145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:32.605 [2024-12-05 10:02:20.096151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:32.605 [2024-12-05 10:02:20.096156] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:32.605 [2024-12-05 10:02:20.096161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:32.605 [2024-12-05 10:02:20.096168] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:32.605 [2024-12-05 10:02:20.096174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:32.605 [2024-12-05 10:02:20.096181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:32.605 [2024-12-05 10:02:20.096187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:32.605 [2024-12-05 10:02:20.096193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:32.605 [2024-12-05 10:02:20.096198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:32.605 [2024-12-05 10:02:20.096204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:32.605 [2024-12-05 10:02:20.096209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:32.605 [2024-12-05 10:02:20.096214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:32.605 [2024-12-05 10:02:20.096220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:32.605 [2024-12-05 10:02:20.096225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:32.605 [2024-12-05 10:02:20.096230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:32.605 [2024-12-05 10:02:20.096236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:32.605 [2024-12-05 10:02:20.096241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:32.605 [2024-12-05 10:02:20.096247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:32.605 [2024-12-05 10:02:20.096252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:32.605 [2024-12-05 10:02:20.096258] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:32.605 [2024-12-05 10:02:20.096264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:32.605 [2024-12-05 10:02:20.096270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:32.605 [2024-12-05 10:02:20.096276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:32.605 [2024-12-05 10:02:20.096281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:32.605 [2024-12-05 10:02:20.096287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:32.605 [2024-12-05 10:02:20.096292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:32.605 [2024-12-05 10:02:20.096297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:32.605 [2024-12-05 10:02:20.096304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.470 ms 00:30:32.605 [2024-12-05 10:02:20.096309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:32.605 [2024-12-05 10:02:20.096339] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:32.605 [2024-12-05 10:02:20.096347] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:35.934 [2024-12-05 10:02:23.271244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.271331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:35.934 [2024-12-05 10:02:23.271351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3174.889 ms 00:30:35.934 [2024-12-05 10:02:23.271360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.302212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.302459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:35.934 [2024-12-05 10:02:23.302482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.584 ms 00:30:35.934 [2024-12-05 10:02:23.302491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.302609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.302629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:35.934 [2024-12-05 10:02:23.302640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:35.934 [2024-12-05 10:02:23.302649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.337680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.337725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:35.934 [2024-12-05 10:02:23.337742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.967 ms 00:30:35.934 [2024-12-05 10:02:23.337751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.337792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.337803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:35.934 [2024-12-05 10:02:23.337812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:35.934 [2024-12-05 10:02:23.337821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.338358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.338380] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:35.934 [2024-12-05 10:02:23.338391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.480 ms 00:30:35.934 [2024-12-05 10:02:23.338400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.338452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.338462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:35.934 [2024-12-05 10:02:23.338471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:35.934 [2024-12-05 10:02:23.338480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.355936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.355979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:35.934 [2024-12-05 10:02:23.355990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.430 ms 00:30:35.934 [2024-12-05 10:02:23.355998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.381957] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:35.934 [2024-12-05 10:02:23.382158] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:35.934 [2024-12-05 10:02:23.382183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.382193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:35.934 [2024-12-05 10:02:23.382205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.058 ms 00:30:35.934 [2024-12-05 10:02:23.382213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.397137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.397322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:35.934 [2024-12-05 10:02:23.397343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.783 ms 00:30:35.934 [2024-12-05 10:02:23.397352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.409845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.409904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:35.934 [2024-12-05 10:02:23.409916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.369 ms 00:30:35.934 [2024-12-05 10:02:23.409923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.422387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.422432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:35.934 [2024-12-05 10:02:23.422444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.413 ms 00:30:35.934 [2024-12-05 10:02:23.422451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.423106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.423139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:35.934 [2024-12-05 
10:02:23.423150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.522 ms 00:30:35.934 [2024-12-05 10:02:23.423159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.487172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.487232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:35.934 [2024-12-05 10:02:23.487247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 63.989 ms 00:30:35.934 [2024-12-05 10:02:23.487257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.498759] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:35.934 [2024-12-05 10:02:23.499938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.499981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:35.934 [2024-12-05 10:02:23.499993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.624 ms 00:30:35.934 [2024-12-05 10:02:23.500001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.500098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.500127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:35.934 [2024-12-05 10:02:23.500138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:35.934 [2024-12-05 10:02:23.500146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.500205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.500216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:35.934 [2024-12-05 10:02:23.500225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:35.934 [2024-12-05 10:02:23.500233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.500260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.500270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:35.934 [2024-12-05 10:02:23.500282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:35.934 [2024-12-05 10:02:23.500290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.500326] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:35.934 [2024-12-05 10:02:23.500336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.500345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:35.934 [2024-12-05 10:02:23.500354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:35.934 [2024-12-05 10:02:23.500362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.525166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.525220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:35.934 [2024-12-05 10:02:23.525233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.778 ms 00:30:35.934 [2024-12-05 10:02:23.525241] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.525334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.934 [2024-12-05 10:02:23.525344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:35.934 [2024-12-05 10:02:23.525354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:35.934 [2024-12-05 10:02:23.525363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.934 [2024-12-05 10:02:23.526673] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3449.485 ms, result 0 00:30:35.934 [2024-12-05 10:02:23.541618] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.934 [2024-12-05 10:02:23.557601] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:36.196 [2024-12-05 10:02:23.565770] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:36.196 10:02:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:36.196 10:02:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:36.196 10:02:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:36.196 10:02:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:36.196 10:02:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:36.196 [2024-12-05 10:02:23.809788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.196 [2024-12-05 10:02:23.809840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:36.196 [2024-12-05 10:02:23.809857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:36.196 [2024-12-05 10:02:23.809865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.196 [2024-12-05 10:02:23.809890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.196 [2024-12-05 10:02:23.809899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:36.196 [2024-12-05 10:02:23.809907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:36.196 [2024-12-05 10:02:23.809915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.196 [2024-12-05 10:02:23.809935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.196 [2024-12-05 10:02:23.809944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:36.196 [2024-12-05 10:02:23.809953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:36.196 [2024-12-05 10:02:23.809960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.196 [2024-12-05 10:02:23.810019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.223 ms, result 0 00:30:36.196 true 00:30:36.458 10:02:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:36.458 { 00:30:36.458 "name": "ftl", 00:30:36.458 "properties": [ 00:30:36.458 { 00:30:36.458 "name": "superblock_version", 00:30:36.458 "value": 5, 00:30:36.458 "read-only": true 00:30:36.458 }, 
00:30:36.458 { 00:30:36.458 "name": "base_device", 00:30:36.458 "bands": [ 00:30:36.458 { 00:30:36.458 "id": 0, 00:30:36.458 "state": "CLOSED", 00:30:36.458 "validity": 1.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 1, 00:30:36.458 "state": "CLOSED", 00:30:36.458 "validity": 1.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 2, 00:30:36.458 "state": "CLOSED", 00:30:36.458 "validity": 0.007843137254901933 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 3, 00:30:36.458 "state": "FREE", 00:30:36.458 "validity": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 4, 00:30:36.458 "state": "FREE", 00:30:36.458 "validity": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 5, 00:30:36.458 "state": "FREE", 00:30:36.458 "validity": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 6, 00:30:36.458 "state": "FREE", 00:30:36.458 "validity": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 7, 00:30:36.458 "state": "FREE", 00:30:36.458 "validity": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 8, 00:30:36.458 "state": "FREE", 00:30:36.458 "validity": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 9, 00:30:36.458 "state": "FREE", 00:30:36.458 "validity": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 10, 00:30:36.458 "state": "FREE", 00:30:36.458 "validity": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 11, 00:30:36.458 "state": "FREE", 00:30:36.458 "validity": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 12, 00:30:36.458 "state": "FREE", 00:30:36.458 "validity": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 13, 00:30:36.458 "state": "FREE", 00:30:36.458 "validity": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 14, 00:30:36.458 "state": "FREE", 00:30:36.458 "validity": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 15, 00:30:36.458 "state": "FREE", 00:30:36.458 "validity": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 16, 00:30:36.458 "state": "FREE", 00:30:36.458 "validity": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 17, 00:30:36.458 "state": "FREE", 00:30:36.458 "validity": 0.0 00:30:36.458 } 00:30:36.458 ], 00:30:36.458 "read-only": true 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "name": "cache_device", 00:30:36.458 "type": "bdev", 00:30:36.458 "chunks": [ 00:30:36.458 { 00:30:36.458 "id": 0, 00:30:36.458 "state": "INACTIVE", 00:30:36.458 "utilization": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 1, 00:30:36.458 "state": "OPEN", 00:30:36.458 "utilization": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 2, 00:30:36.458 "state": "OPEN", 00:30:36.458 "utilization": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 3, 00:30:36.458 "state": "FREE", 00:30:36.458 "utilization": 0.0 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "id": 4, 00:30:36.458 "state": "FREE", 00:30:36.458 "utilization": 0.0 00:30:36.458 } 00:30:36.458 ], 00:30:36.458 "read-only": true 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "name": "verbose_mode", 00:30:36.458 "value": true, 00:30:36.458 "unit": "", 00:30:36.458 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "name": "prep_upgrade_on_shutdown", 00:30:36.458 "value": false, 00:30:36.458 "unit": "", 00:30:36.458 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:36.458 } 00:30:36.458 ] 00:30:36.458 } 00:30:36.458 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:36.458 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:36.458 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:36.719 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:36.719 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:36.719 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:36.719 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:36.719 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:36.978 Validate MD5 checksum, iteration 1 00:30:36.978 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:36.979 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:36.979 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:36.979 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:36.979 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:36.979 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:36.979 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:36.979 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:36.979 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:36.979 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:36.979 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:36.979 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:36.979 10:02:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:36.979 [2024-12-05 10:02:24.567675] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:30:36.979 [2024-12-05 10:02:24.568053] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83552 ] 00:30:37.238 [2024-12-05 10:02:24.733776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.238 [2024-12-05 10:02:24.856302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.154  [2024-12-05T10:02:27.355Z] Copying: 540/1024 [MB] (540 MBps) [2024-12-05T10:02:29.271Z] Copying: 1024/1024 [MB] (average 549 MBps) 00:30:41.642 00:30:41.642 10:02:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:41.642 10:02:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:44.200 10:02:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:44.200 Validate MD5 checksum, iteration 2 00:30:44.200 10:02:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=86ac81660784687901b27160df4e2860 00:30:44.200 10:02:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 86ac81660784687901b27160df4e2860 != \8\6\a\c\8\1\6\6\0\7\8\4\6\8\7\9\0\1\b\2\7\1\6\0\d\f\4\e\2\8\6\0 ]] 00:30:44.200 10:02:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:44.200 10:02:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:44.200 10:02:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:44.200 10:02:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:44.200 10:02:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:44.200 10:02:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:44.200 10:02:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:44.200 10:02:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:44.200 10:02:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:44.200 [2024-12-05 10:02:31.444760] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:30:44.200 [2024-12-05 10:02:31.445051] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83625 ] 00:30:44.200 [2024-12-05 10:02:31.604105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.200 [2024-12-05 10:02:31.709018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.108  [2024-12-05T10:02:33.994Z] Copying: 649/1024 [MB] (649 MBps) [2024-12-05T10:02:34.933Z] Copying: 1024/1024 [MB] (average 588 MBps) 00:30:47.304 00:30:47.305 10:02:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:47.305 10:02:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:49.846 10:02:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=081c828a6dd46c4fbd5fdafa40128104 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 081c828a6dd46c4fbd5fdafa40128104 != \0\8\1\c\8\2\8\a\6\d\d\4\6\c\4\f\b\d\5\f\d\a\f\a\4\0\1\2\8\1\0\4 ]] 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83472 ]] 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83472 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83686 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83686 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83686 ']' 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
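
With matching sums for both windows of the first pass, upgrade_shutdown.sh@114 tears the target down the hard way: tcp_target_shutdown_dirty sends SIGKILL, so FTL never runs its shutdown path and the device is left dirty on purpose; tcp_target_setup then relaunches spdk_tgt from the tgt.json captured at first startup and waits for the RPC socket. A rough sketch of that sequence assembled from the ftl/common.sh xtrace above (helper internals elided, pids and paths as shown in the trace):

  # dirty shutdown: SIGKILL means no "Set FTL clean state" step runs
  kill -9 "$spdk_tgt_pid"
  unset spdk_tgt_pid

  # relaunch from the JSON config saved when the bdev was first created
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"   # blocks until /var/tmp/spdk.sock answers RPCs

The payoff is visible in the startup log that follows: the relaunched target loads the superblock with "SHM: clean 0, shm_clean 0" and walks the recovery chain (Recover band state, Restore P2L checkpoints, Recover open chunk) instead of a clean load.
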
00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:49.846 10:02:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:49.846 [2024-12-05 10:02:37.084307] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:30:49.846 [2024-12-05 10:02:37.084424] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83686 ] 00:30:49.846 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83472 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:49.846 [2024-12-05 10:02:37.247179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.846 [2024-12-05 10:02:37.367925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.790 [2024-12-05 10:02:38.118813] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:50.790 [2024-12-05 10:02:38.119170] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:50.790 [2024-12-05 10:02:38.272446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.790 [2024-12-05 10:02:38.272528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:50.790 [2024-12-05 10:02:38.272545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:50.790 [2024-12-05 10:02:38.272554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.790 [2024-12-05 10:02:38.272622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.790 [2024-12-05 10:02:38.272646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:50.790 [2024-12-05 10:02:38.272655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:30:50.790 [2024-12-05 10:02:38.272664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.790 [2024-12-05 10:02:38.272692] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:50.790 [2024-12-05 10:02:38.273558] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:50.790 [2024-12-05 10:02:38.273607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.790 [2024-12-05 10:02:38.273617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:50.790 [2024-12-05 10:02:38.273626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.925 ms 00:30:50.790 [2024-12-05 10:02:38.273634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.790 [2024-12-05 10:02:38.273996] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:50.790 [2024-12-05 10:02:38.292673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.790 [2024-12-05 10:02:38.292727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:50.790 [2024-12-05 10:02:38.292740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.677 ms 00:30:50.790 [2024-12-05 10:02:38.292749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.790 [2024-12-05 10:02:38.302461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:50.790 [2024-12-05 10:02:38.302535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:50.790 [2024-12-05 10:02:38.302548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:30:50.790 [2024-12-05 10:02:38.302556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.790 [2024-12-05 10:02:38.302904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.790 [2024-12-05 10:02:38.302918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:50.790 [2024-12-05 10:02:38.302928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.259 ms 00:30:50.790 [2024-12-05 10:02:38.302937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.790 [2024-12-05 10:02:38.302994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.790 [2024-12-05 10:02:38.303005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:50.790 [2024-12-05 10:02:38.303013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:30:50.790 [2024-12-05 10:02:38.303022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.790 [2024-12-05 10:02:38.303046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.790 [2024-12-05 10:02:38.303056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:50.790 [2024-12-05 10:02:38.303064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:50.790 [2024-12-05 10:02:38.303072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.790 [2024-12-05 10:02:38.303094] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:50.790 [2024-12-05 10:02:38.306422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.790 [2024-12-05 10:02:38.306464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:50.790 [2024-12-05 10:02:38.306474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.333 ms 00:30:50.790 [2024-12-05 10:02:38.306482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.790 [2024-12-05 10:02:38.306539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.790 [2024-12-05 10:02:38.306549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:50.790 [2024-12-05 10:02:38.306559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:50.790 [2024-12-05 10:02:38.306567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.790 [2024-12-05 10:02:38.306603] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:50.790 [2024-12-05 10:02:38.306628] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:50.790 [2024-12-05 10:02:38.306664] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:50.790 [2024-12-05 10:02:38.306684] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:50.790 [2024-12-05 10:02:38.306791] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:50.790 [2024-12-05 10:02:38.306803] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:50.790 [2024-12-05 10:02:38.306814] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:50.790 [2024-12-05 10:02:38.306825] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:50.790 [2024-12-05 10:02:38.306834] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:50.790 [2024-12-05 10:02:38.306842] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:50.790 [2024-12-05 10:02:38.306850] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:50.790 [2024-12-05 10:02:38.306857] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:50.790 [2024-12-05 10:02:38.306865] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:50.790 [2024-12-05 10:02:38.306876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.790 [2024-12-05 10:02:38.306883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:50.790 [2024-12-05 10:02:38.306892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.276 ms 00:30:50.790 [2024-12-05 10:02:38.306899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.790 [2024-12-05 10:02:38.306985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.790 [2024-12-05 10:02:38.306994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:50.790 [2024-12-05 10:02:38.307003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:50.790 [2024-12-05 10:02:38.307010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.790 [2024-12-05 10:02:38.307112] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:50.790 [2024-12-05 10:02:38.307126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:50.790 [2024-12-05 10:02:38.307134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:50.790 [2024-12-05 10:02:38.307142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.790 [2024-12-05 10:02:38.307151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:50.791 [2024-12-05 10:02:38.307157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:50.791 [2024-12-05 10:02:38.307164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:50.791 [2024-12-05 10:02:38.307171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:50.791 [2024-12-05 10:02:38.307178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:50.791 [2024-12-05 10:02:38.307185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.791 [2024-12-05 10:02:38.307191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:50.791 [2024-12-05 10:02:38.307199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:50.791 [2024-12-05 10:02:38.307205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.791 [2024-12-05 10:02:38.307212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:50.791 [2024-12-05 10:02:38.307219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:50.791 [2024-12-05 10:02:38.307230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.791 [2024-12-05 10:02:38.307238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:50.791 [2024-12-05 10:02:38.307245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:50.791 [2024-12-05 10:02:38.307252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.791 [2024-12-05 10:02:38.307261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:50.791 [2024-12-05 10:02:38.307268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:50.791 [2024-12-05 10:02:38.307281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:50.791 [2024-12-05 10:02:38.307288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:50.791 [2024-12-05 10:02:38.307295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:50.791 [2024-12-05 10:02:38.307302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:50.791 [2024-12-05 10:02:38.307308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:50.791 [2024-12-05 10:02:38.307314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:50.791 [2024-12-05 10:02:38.307321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:50.791 [2024-12-05 10:02:38.307328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:50.791 [2024-12-05 10:02:38.307334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:50.791 [2024-12-05 10:02:38.307340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:50.791 [2024-12-05 10:02:38.307347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:50.791 [2024-12-05 10:02:38.307354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:50.791 [2024-12-05 10:02:38.307360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.791 [2024-12-05 10:02:38.307367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:50.791 [2024-12-05 10:02:38.307373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:50.791 [2024-12-05 10:02:38.307379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.791 [2024-12-05 10:02:38.307386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:50.791 [2024-12-05 10:02:38.307392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:50.791 [2024-12-05 10:02:38.307398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.791 [2024-12-05 10:02:38.307405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:50.791 [2024-12-05 10:02:38.307412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:50.791 [2024-12-05 10:02:38.307419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.791 [2024-12-05 10:02:38.307425] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:50.791 [2024-12-05 10:02:38.307441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:50.791 [2024-12-05 10:02:38.307449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:50.791 [2024-12-05 10:02:38.307457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:50.791 [2024-12-05 10:02:38.307466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:50.791 [2024-12-05 10:02:38.307473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:50.791 [2024-12-05 10:02:38.307480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:50.791 [2024-12-05 10:02:38.307487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:50.791 [2024-12-05 10:02:38.307494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:50.791 [2024-12-05 10:02:38.307501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:50.791 [2024-12-05 10:02:38.307521] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:50.791 [2024-12-05 10:02:38.307532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:50.791 [2024-12-05 10:02:38.307541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:50.791 [2024-12-05 10:02:38.307549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:50.791 [2024-12-05 10:02:38.307557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:50.791 [2024-12-05 10:02:38.307564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:50.791 [2024-12-05 10:02:38.307571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:50.791 [2024-12-05 10:02:38.307579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:50.791 [2024-12-05 10:02:38.307587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:50.791 [2024-12-05 10:02:38.307594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:50.791 [2024-12-05 10:02:38.307602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:50.791 [2024-12-05 10:02:38.307610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:50.791 [2024-12-05 10:02:38.307617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:50.791 [2024-12-05 10:02:38.307624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:50.791 [2024-12-05 10:02:38.307632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:50.791 [2024-12-05 10:02:38.307640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:50.791 [2024-12-05 10:02:38.307647] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:50.791 [2024-12-05 10:02:38.307656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:50.791 [2024-12-05 10:02:38.307674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:50.791 [2024-12-05 10:02:38.307682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:50.791 [2024-12-05 10:02:38.307691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:50.791 [2024-12-05 10:02:38.307698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:50.791 [2024-12-05 10:02:38.307706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.791 [2024-12-05 10:02:38.307713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:50.791 [2024-12-05 10:02:38.307721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.664 ms 00:30:50.791 [2024-12-05 10:02:38.307728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.792 [2024-12-05 10:02:38.338033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.792 [2024-12-05 10:02:38.338081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:50.792 [2024-12-05 10:02:38.338094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.254 ms 00:30:50.792 [2024-12-05 10:02:38.338103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.792 [2024-12-05 10:02:38.338149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.792 [2024-12-05 10:02:38.338158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:50.792 [2024-12-05 10:02:38.338168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:50.792 [2024-12-05 10:02:38.338176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.792 [2024-12-05 10:02:38.373449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.792 [2024-12-05 10:02:38.373493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:50.792 [2024-12-05 10:02:38.373505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.209 ms 00:30:50.792 [2024-12-05 10:02:38.373531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.792 [2024-12-05 10:02:38.373575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.792 [2024-12-05 10:02:38.373585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:50.792 [2024-12-05 10:02:38.373595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:50.792 [2024-12-05 10:02:38.373606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.792 [2024-12-05 10:02:38.373727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.792 [2024-12-05 10:02:38.373739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:50.792 [2024-12-05 10:02:38.373749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:30:50.792 [2024-12-05 10:02:38.373757] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:50.792 [2024-12-05 10:02:38.373805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.792 [2024-12-05 10:02:38.373815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:50.792 [2024-12-05 10:02:38.373823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:30:50.792 [2024-12-05 10:02:38.373831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.792 [2024-12-05 10:02:38.392360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.792 [2024-12-05 10:02:38.392581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:50.792 [2024-12-05 10:02:38.392603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.502 ms 00:30:50.792 [2024-12-05 10:02:38.392616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.792 [2024-12-05 10:02:38.392736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.792 [2024-12-05 10:02:38.392749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:50.792 [2024-12-05 10:02:38.392758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:50.792 [2024-12-05 10:02:38.392767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.052 [2024-12-05 10:02:38.435823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.052 [2024-12-05 10:02:38.435862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:51.052 [2024-12-05 10:02:38.435875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.036 ms 00:30:51.052 [2024-12-05 10:02:38.435883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.052 [2024-12-05 10:02:38.445126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.052 [2024-12-05 10:02:38.445249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:51.052 [2024-12-05 10:02:38.445274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.508 ms 00:30:51.052 [2024-12-05 10:02:38.445282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.052 [2024-12-05 10:02:38.500621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.052 [2024-12-05 10:02:38.500673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:51.052 [2024-12-05 10:02:38.500685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.285 ms 00:30:51.052 [2024-12-05 10:02:38.500693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.052 [2024-12-05 10:02:38.500823] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:51.052 [2024-12-05 10:02:38.500913] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:51.052 [2024-12-05 10:02:38.501000] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:51.052 [2024-12-05 10:02:38.501090] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:51.052 [2024-12-05 10:02:38.501099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.052 [2024-12-05 10:02:38.501107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:51.052 [2024-12-05 
10:02:38.501115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.365 ms 00:30:51.052 [2024-12-05 10:02:38.501123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.052 [2024-12-05 10:02:38.501189] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:51.052 [2024-12-05 10:02:38.501201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.052 [2024-12-05 10:02:38.501211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:51.052 [2024-12-05 10:02:38.501219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:51.052 [2024-12-05 10:02:38.501227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.052 [2024-12-05 10:02:38.516632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.052 [2024-12-05 10:02:38.516674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:51.052 [2024-12-05 10:02:38.516685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.384 ms 00:30:51.052 [2024-12-05 10:02:38.516693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.052 [2024-12-05 10:02:38.525131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.052 [2024-12-05 10:02:38.525163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:51.052 [2024-12-05 10:02:38.525173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:51.053 [2024-12-05 10:02:38.525181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.053 [2024-12-05 10:02:38.525265] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:51.053 [2024-12-05 10:02:38.525410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.053 [2024-12-05 10:02:38.525421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:51.053 [2024-12-05 10:02:38.525429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.146 ms 00:30:51.053 [2024-12-05 10:02:38.525437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.622 [2024-12-05 10:02:39.205209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.622 [2024-12-05 10:02:39.205305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:51.622 [2024-12-05 10:02:39.205323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 678.882 ms 00:30:51.622 [2024-12-05 10:02:39.205333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.622 [2024-12-05 10:02:39.210118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.622 [2024-12-05 10:02:39.210170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:51.622 [2024-12-05 10:02:39.210182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.631 ms 00:30:51.622 [2024-12-05 10:02:39.210191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.622 [2024-12-05 10:02:39.211276] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:51.622 [2024-12-05 10:02:39.211325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.622 [2024-12-05 10:02:39.211336] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:51.622 [2024-12-05 10:02:39.211347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.094 ms 00:30:51.622 [2024-12-05 10:02:39.211355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.623 [2024-12-05 10:02:39.211396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.623 [2024-12-05 10:02:39.211408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:51.623 [2024-12-05 10:02:39.211417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:51.623 [2024-12-05 10:02:39.211432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.623 [2024-12-05 10:02:39.211469] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 686.199 ms, result 0 00:30:51.623 [2024-12-05 10:02:39.211530] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:51.623 [2024-12-05 10:02:39.211645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.623 [2024-12-05 10:02:39.211660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:51.623 [2024-12-05 10:02:39.211669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.116 ms 00:30:51.623 [2024-12-05 10:02:39.211677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.565 [2024-12-05 10:02:40.038074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.565 [2024-12-05 10:02:40.038326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:52.565 [2024-12-05 10:02:40.038370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 825.279 ms 00:30:52.565 [2024-12-05 10:02:40.038381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.565 [2024-12-05 10:02:40.043209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.565 [2024-12-05 10:02:40.043261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:52.565 [2024-12-05 10:02:40.043273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.554 ms 00:30:52.565 [2024-12-05 10:02:40.043281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.565 [2024-12-05 10:02:40.044532] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:52.565 [2024-12-05 10:02:40.044578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.565 [2024-12-05 10:02:40.044587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:52.565 [2024-12-05 10:02:40.044596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.264 ms 00:30:52.565 [2024-12-05 10:02:40.044604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.565 [2024-12-05 10:02:40.044642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.565 [2024-12-05 10:02:40.044651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:52.565 [2024-12-05 10:02:40.044660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:52.565 [2024-12-05 10:02:40.044668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.565 [2024-12-05 
10:02:40.044707] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 833.187 ms, result 0 00:30:52.565 [2024-12-05 10:02:40.044754] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:52.565 [2024-12-05 10:02:40.044765] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:52.565 [2024-12-05 10:02:40.044776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.565 [2024-12-05 10:02:40.044785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:52.565 [2024-12-05 10:02:40.044794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1519.527 ms 00:30:52.565 [2024-12-05 10:02:40.044802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.565 [2024-12-05 10:02:40.044834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.565 [2024-12-05 10:02:40.044847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:52.565 [2024-12-05 10:02:40.044857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:52.565 [2024-12-05 10:02:40.044865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.565 [2024-12-05 10:02:40.057458] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:52.565 [2024-12-05 10:02:40.057631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.565 [2024-12-05 10:02:40.057643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:52.565 [2024-12-05 10:02:40.057654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.749 ms 00:30:52.565 [2024-12-05 10:02:40.057663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.565 [2024-12-05 10:02:40.058381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.565 [2024-12-05 10:02:40.058410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:52.565 [2024-12-05 10:02:40.058424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.643 ms 00:30:52.565 [2024-12-05 10:02:40.058432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.565 [2024-12-05 10:02:40.060686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.565 [2024-12-05 10:02:40.060712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:52.565 [2024-12-05 10:02:40.060723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.236 ms 00:30:52.565 [2024-12-05 10:02:40.060732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.565 [2024-12-05 10:02:40.060775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.565 [2024-12-05 10:02:40.060786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:52.565 [2024-12-05 10:02:40.060795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:52.565 [2024-12-05 10:02:40.060807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.565 [2024-12-05 10:02:40.060915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.565 [2024-12-05 10:02:40.060926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:52.565 
[2024-12-05 10:02:40.060934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:52.566 [2024-12-05 10:02:40.060942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.566 [2024-12-05 10:02:40.060963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.566 [2024-12-05 10:02:40.060971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:52.566 [2024-12-05 10:02:40.060980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:52.566 [2024-12-05 10:02:40.060994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.566 [2024-12-05 10:02:40.061030] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:52.566 [2024-12-05 10:02:40.061041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.566 [2024-12-05 10:02:40.061049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:52.566 [2024-12-05 10:02:40.061057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:52.566 [2024-12-05 10:02:40.061065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.566 [2024-12-05 10:02:40.061119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.566 [2024-12-05 10:02:40.061129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:52.566 [2024-12-05 10:02:40.061137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:30:52.566 [2024-12-05 10:02:40.061145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.566 [2024-12-05 10:02:40.062255] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1789.319 ms, result 0 00:30:52.566 [2024-12-05 10:02:40.077988] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.566 [2024-12-05 10:02:40.093992] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:52.566 [2024-12-05 10:02:40.103110] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:52.566 Validate MD5 checksum, iteration 1 00:30:52.566 10:02:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.566 10:02:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:52.566 10:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:52.566 10:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:52.566 10:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:52.566 10:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:52.566 10:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:52.566 10:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:52.566 10:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:52.566 10:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:52.566 10:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:52.566 10:02:40 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:52.566 10:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:52.566 10:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:52.566 10:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:52.827 [2024-12-05 10:02:40.219546] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:30:52.827 [2024-12-05 10:02:40.219926] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83727 ] 00:30:52.827 [2024-12-05 10:02:40.375782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.088 [2024-12-05 10:02:40.503442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.471  [2024-12-05T10:02:43.043Z] Copying: 618/1024 [MB] (618 MBps) [2024-12-05T10:02:43.987Z] Copying: 1024/1024 [MB] (average 631 MBps) 00:30:56.358 00:30:56.358 10:02:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:56.358 10:02:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:58.333 10:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:58.333 10:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=86ac81660784687901b27160df4e2860 00:30:58.333 10:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 86ac81660784687901b27160df4e2860 != \8\6\a\c\8\1\6\6\0\7\8\4\6\8\7\9\0\1\b\2\7\1\6\0\d\f\4\e\2\8\6\0 ]] 00:30:58.333 10:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:58.333 10:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:58.333 10:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:58.333 Validate MD5 checksum, iteration 2 00:30:58.333 10:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:58.333 10:02:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:58.333 10:02:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:58.333 10:02:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:58.333 10:02:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:58.333 10:02:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:58.333 [2024-12-05 10:02:45.883132] Starting SPDK v25.01-pre git sha1 
8d3947977 / DPDK 24.03.0 initialization... 00:30:58.333 [2024-12-05 10:02:45.883388] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83789 ] 00:30:58.592 [2024-12-05 10:02:46.041588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.592 [2024-12-05 10:02:46.120111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.975  [2024-12-05T10:02:48.174Z] Copying: 638/1024 [MB] (638 MBps) [2024-12-05T10:02:53.450Z] Copying: 1024/1024 [MB] (average 638 MBps) 00:31:05.821 00:31:05.821 10:02:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:05.821 10:02:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=081c828a6dd46c4fbd5fdafa40128104 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 081c828a6dd46c4fbd5fdafa40128104 != \0\8\1\c\8\2\8\a\6\d\d\4\6\c\4\f\b\d\5\f\d\a\f\a\4\0\1\2\8\1\0\4 ]] 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83686 ]] 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83686 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83686 ']' 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83686 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83686 00:31:08.350 killing process with pid 83686 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83686' 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 83686 00:31:08.350 10:02:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83686 00:31:08.610 [2024-12-05 10:02:56.121294] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:08.610 [2024-12-05 10:02:56.131832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.610 [2024-12-05 10:02:56.131868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:08.610 [2024-12-05 10:02:56.131881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:08.610 [2024-12-05 10:02:56.131889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.610 [2024-12-05 10:02:56.131908] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:08.610 [2024-12-05 10:02:56.134079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.610 [2024-12-05 10:02:56.134111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:08.610 [2024-12-05 10:02:56.134120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.160 ms 00:31:08.610 [2024-12-05 10:02:56.134126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.610 [2024-12-05 10:02:56.134315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.610 [2024-12-05 10:02:56.134325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:08.610 [2024-12-05 10:02:56.134332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.170 ms 00:31:08.610 [2024-12-05 10:02:56.134339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.610 [2024-12-05 10:02:56.135760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.610 [2024-12-05 10:02:56.135783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:08.610 [2024-12-05 10:02:56.135791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.409 ms 00:31:08.610 [2024-12-05 10:02:56.135801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.610 [2024-12-05 10:02:56.136711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.610 [2024-12-05 10:02:56.136735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:08.610 [2024-12-05 10:02:56.136744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.860 ms 00:31:08.610 [2024-12-05 10:02:56.136751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.610 [2024-12-05 10:02:56.145187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.610 [2024-12-05 10:02:56.145212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:08.610 [2024-12-05 10:02:56.145224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.407 ms 00:31:08.610 [2024-12-05 10:02:56.145230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.610 [2024-12-05 10:02:56.149758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.610 [2024-12-05 10:02:56.149783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:08.610 [2024-12-05 10:02:56.149792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.501 ms 00:31:08.610 [2024-12-05 10:02:56.149799] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:08.610 [2024-12-05 10:02:56.149860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.610 [2024-12-05 10:02:56.149868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:08.610 [2024-12-05 10:02:56.149875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:31:08.610 [2024-12-05 10:02:56.149884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.610 [2024-12-05 10:02:56.157289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.610 [2024-12-05 10:02:56.157411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:08.610 [2024-12-05 10:02:56.157423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.392 ms 00:31:08.610 [2024-12-05 10:02:56.157429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.610 [2024-12-05 10:02:56.164848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.610 [2024-12-05 10:02:56.164929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:08.610 [2024-12-05 10:02:56.164974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.395 ms 00:31:08.610 [2024-12-05 10:02:56.164991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.610 [2024-12-05 10:02:56.172086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.610 [2024-12-05 10:02:56.172184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:08.610 [2024-12-05 10:02:56.172289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.064 ms 00:31:08.610 [2024-12-05 10:02:56.172306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.610 [2024-12-05 10:02:56.179221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.610 [2024-12-05 10:02:56.179310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:08.610 [2024-12-05 10:02:56.179349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.863 ms 00:31:08.610 [2024-12-05 10:02:56.179366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.610 [2024-12-05 10:02:56.179414] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:08.610 [2024-12-05 10:02:56.179442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:08.610 [2024-12-05 10:02:56.179467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:08.610 [2024-12-05 10:02:56.179489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:08.610 [2024-12-05 10:02:56.179521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:08.610 [2024-12-05 10:02:56.179594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:08.610 [2024-12-05 10:02:56.179692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:08.610 [2024-12-05 10:02:56.179717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:08.610 [2024-12-05 10:02:56.179740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:08.610 
[2024-12-05 10:02:56.179763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:08.610 [2024-12-05 10:02:56.179815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:08.610 [2024-12-05 10:02:56.179840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:08.610 [2024-12-05 10:02:56.179862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:08.610 [2024-12-05 10:02:56.179884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:08.610 [2024-12-05 10:02:56.179907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:08.610 [2024-12-05 10:02:56.179950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:08.610 [2024-12-05 10:02:56.180204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:08.610 [2024-12-05 10:02:56.180230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:08.611 [2024-12-05 10:02:56.180269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:08.611 [2024-12-05 10:02:56.180294] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:08.611 [2024-12-05 10:02:56.180310] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 4907301a-5f90-480a-83c6-a47abfc049a4 00:31:08.611 [2024-12-05 10:02:56.180335] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:08.611 [2024-12-05 10:02:56.180378] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:08.611 [2024-12-05 10:02:56.180395] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:08.611 [2024-12-05 10:02:56.180411] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:08.611 [2024-12-05 10:02:56.180425] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:08.611 [2024-12-05 10:02:56.180442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:08.611 [2024-12-05 10:02:56.180460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:08.611 [2024-12-05 10:02:56.180492] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:08.611 [2024-12-05 10:02:56.180507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:08.611 [2024-12-05 10:02:56.180532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.611 [2024-12-05 10:02:56.180549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:08.611 [2024-12-05 10:02:56.180564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.119 ms 00:31:08.611 [2024-12-05 10:02:56.180579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.611 [2024-12-05 10:02:56.190530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.611 [2024-12-05 10:02:56.190610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:08.611 [2024-12-05 10:02:56.190648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.917 ms 00:31:08.611 [2024-12-05 10:02:56.190665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
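For reference, the checksum-validation loop that the upgrade_shutdown.sh@97-105 and common.sh@198-199 records earlier in this trace step through has roughly the following shape. This is a sketch reconstructed from the xtrace, not the verbatim script; 'sums', 'file', and 'iterations' are illustrative stand-ins for whatever the test actually uses.

    # per common.sh@199 in the trace: spdk_dd pinned to core 1, driven over
    # the initiator's RPC socket with the generated ini.json config
    tcp_dd() {
        "$spdk_dd_bin" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$spdk_ini_cnfg" "$@"
    }

    skip=0
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # read the next 1 GiB window back out of ftln1 in 1 MiB blocks, QD 2
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))                    # traced as skip=1024, then skip=2048
        sum=$(md5sum "$file" | cut -f1 '-d ')
        [[ $sum == "${sums[i]}" ]] || exit 1     # both iterations matched above:
    done                                         # 86ac8166... and 081c828a...
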
00:31:08.611 [2024-12-05 10:02:56.190967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.611 [2024-12-05 10:02:56.190993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:08.611 [2024-12-05 10:02:56.191033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.273 ms 00:31:08.611 [2024-12-05 10:02:56.191050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.611 [2024-12-05 10:02:56.226179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.611 [2024-12-05 10:02:56.226267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:08.611 [2024-12-05 10:02:56.226307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.611 [2024-12-05 10:02:56.226330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.611 [2024-12-05 10:02:56.226364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.611 [2024-12-05 10:02:56.226380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:08.611 [2024-12-05 10:02:56.226396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.611 [2024-12-05 10:02:56.226410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.611 [2024-12-05 10:02:56.226489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.611 [2024-12-05 10:02:56.226520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:08.611 [2024-12-05 10:02:56.226537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.611 [2024-12-05 10:02:56.226580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.611 [2024-12-05 10:02:56.226610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.611 [2024-12-05 10:02:56.226626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:08.611 [2024-12-05 10:02:56.226641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.611 [2024-12-05 10:02:56.226655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.871 [2024-12-05 10:02:56.288378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.871 [2024-12-05 10:02:56.288498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:08.871 [2024-12-05 10:02:56.288552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.871 [2024-12-05 10:02:56.288570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.871 [2024-12-05 10:02:56.338892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.871 [2024-12-05 10:02:56.339011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:08.871 [2024-12-05 10:02:56.339051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.871 [2024-12-05 10:02:56.339068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.871 [2024-12-05 10:02:56.339139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.871 [2024-12-05 10:02:56.339158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:08.871 [2024-12-05 10:02:56.339174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.871 [2024-12-05 10:02:56.339190] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.871 [2024-12-05 10:02:56.339248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.871 [2024-12-05 10:02:56.339277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:08.871 [2024-12-05 10:02:56.339294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.871 [2024-12-05 10:02:56.339339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.871 [2024-12-05 10:02:56.339435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.871 [2024-12-05 10:02:56.339503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:08.871 [2024-12-05 10:02:56.339558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.872 [2024-12-05 10:02:56.339576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.872 [2024-12-05 10:02:56.339617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.872 [2024-12-05 10:02:56.339637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:08.872 [2024-12-05 10:02:56.339656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.872 [2024-12-05 10:02:56.339678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.872 [2024-12-05 10:02:56.339723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.872 [2024-12-05 10:02:56.339741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:08.872 [2024-12-05 10:02:56.339755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.872 [2024-12-05 10:02:56.339770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.872 [2024-12-05 10:02:56.339845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.872 [2024-12-05 10:02:56.339868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:08.872 [2024-12-05 10:02:56.339885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.872 [2024-12-05 10:02:56.339898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.872 [2024-12-05 10:02:56.340016] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 208.151 ms, result 0 00:31:09.841 10:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:09.841 10:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:09.841 10:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:09.841 10:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:09.841 10:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:09.841 10:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:09.841 Remove shared memory files 00:31:09.841 10:02:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:09.841 10:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:09.841 10:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:09.841 10:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:09.841 10:02:57 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83472 00:31:09.841 10:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:09.841 10:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:09.841 ************************************ 00:31:09.841 END TEST ftl_upgrade_shutdown 00:31:09.841 ************************************ 00:31:09.841 00:31:09.841 real 1m27.881s 00:31:09.841 user 1m59.645s 00:31:09.841 sys 0m20.048s 00:31:09.841 10:02:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:09.841 10:02:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:09.841 10:02:57 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:31:09.841 10:02:57 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:09.841 10:02:57 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:31:09.841 10:02:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:09.841 10:02:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:09.841 ************************************ 00:31:09.841 START TEST ftl_restore_fast 00:31:09.841 ************************************ 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:09.841 * Looking for test storage... 00:31:09.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:09.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.841 --rc genhtml_branch_coverage=1 00:31:09.841 --rc genhtml_function_coverage=1 00:31:09.841 --rc genhtml_legend=1 00:31:09.841 --rc geninfo_all_blocks=1 00:31:09.841 --rc geninfo_unexecuted_blocks=1 00:31:09.841 00:31:09.841 ' 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:09.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.841 --rc genhtml_branch_coverage=1 00:31:09.841 --rc genhtml_function_coverage=1 00:31:09.841 --rc genhtml_legend=1 00:31:09.841 --rc geninfo_all_blocks=1 00:31:09.841 --rc geninfo_unexecuted_blocks=1 00:31:09.841 00:31:09.841 ' 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:09.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.841 --rc genhtml_branch_coverage=1 00:31:09.841 --rc genhtml_function_coverage=1 00:31:09.841 --rc genhtml_legend=1 00:31:09.841 --rc geninfo_all_blocks=1 00:31:09.841 --rc geninfo_unexecuted_blocks=1 00:31:09.841 00:31:09.841 ' 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:09.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.841 --rc genhtml_branch_coverage=1 00:31:09.841 --rc genhtml_function_coverage=1 00:31:09.841 --rc genhtml_legend=1 00:31:09.841 --rc geninfo_all_blocks=1 00:31:09.841 --rc geninfo_unexecuted_blocks=1 00:31:09.841 00:31:09.841 ' 00:31:09.841 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
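The records that follow trace common.sh's environment exports and then restore.sh's setup (@13-41). A sketch of that setup, reconstructed from the xtrace below: the getopts plumbing and the 'uuid' name are assumptions, while the other variable names and values match the trace.

    mount_dir=$(mktemp -d)                 # traced as /tmp/tmp.rR2MCYFfEn
    while getopts ':u:c:f' opt; do
        case $opt in
            f) fast_shutdown=1 ;;          # -f: create the FTL bdev with --fast-shutdown
            c) nv_cache=$OPTARG ;;         # -c: NV-cache PCIe address, 0000:00:10.0 here
            u) uuid=$OPTARG ;;             # accepted per the ':u:c:f' optstring; unused in this run
        esac
    done
    shift $((OPTIND - 1))                  # traced as 'shift 3' for '-f -c 0000:00:10.0'
    device=$1                              # base device, 0000:00:11.0
    timeout=240
    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
    "$spdk_tgt_bin" &                      # spdk_tgt started in the background...
    svcpid=$!                              # ...traced as svcpid=83984
    waitforlisten "$svcpid"                # blocks until /var/tmp/spdk.sock is listening
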
00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.rR2MCYFfEn 00:31:09.842 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:31:10.101 10:02:57 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=83984 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 83984 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 83984 ']' 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:10.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:10.101 10:02:57 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:10.101 [2024-12-05 10:02:57.520759] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:31:10.102 [2024-12-05 10:02:57.521023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83984 ] 00:31:10.102 [2024-12-05 10:02:57.679393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.362 [2024-12-05 10:02:57.778237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.931 10:02:58 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:10.931 10:02:58 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:31:10.931 10:02:58 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:10.931 10:02:58 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:31:10.931 10:02:58 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:10.931 10:02:58 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:31:10.931 10:02:58 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:31:10.931 10:02:58 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:11.192 10:02:58 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:11.192 10:02:58 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:31:11.192 10:02:58 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:11.192 10:02:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:11.192 10:02:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:11.192 10:02:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:11.192 10:02:58 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:31:11.192 10:02:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:11.454 10:02:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:11.454 { 00:31:11.454 "name": "nvme0n1", 00:31:11.454 "aliases": [ 00:31:11.454 "df08b724-c76a-414b-896b-92d9a5fa88b8" 00:31:11.454 ], 00:31:11.454 "product_name": "NVMe disk", 00:31:11.454 "block_size": 4096, 00:31:11.454 "num_blocks": 1310720, 00:31:11.454 "uuid": "df08b724-c76a-414b-896b-92d9a5fa88b8", 00:31:11.454 "numa_id": -1, 00:31:11.454 "assigned_rate_limits": { 00:31:11.454 "rw_ios_per_sec": 0, 00:31:11.454 "rw_mbytes_per_sec": 0, 00:31:11.454 "r_mbytes_per_sec": 0, 00:31:11.454 "w_mbytes_per_sec": 0 00:31:11.454 }, 00:31:11.454 "claimed": true, 00:31:11.454 "claim_type": "read_many_write_one", 00:31:11.454 "zoned": false, 00:31:11.454 "supported_io_types": { 00:31:11.454 "read": true, 00:31:11.454 "write": true, 00:31:11.454 "unmap": true, 00:31:11.454 "flush": true, 00:31:11.454 "reset": true, 00:31:11.454 "nvme_admin": true, 00:31:11.454 "nvme_io": true, 00:31:11.454 "nvme_io_md": false, 00:31:11.454 "write_zeroes": true, 00:31:11.454 "zcopy": false, 00:31:11.454 "get_zone_info": false, 00:31:11.454 "zone_management": false, 00:31:11.454 "zone_append": false, 00:31:11.454 "compare": true, 00:31:11.454 "compare_and_write": false, 00:31:11.454 "abort": true, 00:31:11.454 "seek_hole": false, 00:31:11.454 "seek_data": false, 00:31:11.454 "copy": true, 00:31:11.454 "nvme_iov_md": false 00:31:11.454 }, 00:31:11.454 "driver_specific": { 00:31:11.454 "nvme": [ 00:31:11.454 { 00:31:11.454 "pci_address": "0000:00:11.0", 00:31:11.454 "trid": { 00:31:11.454 "trtype": "PCIe", 00:31:11.454 "traddr": "0000:00:11.0" 00:31:11.454 }, 00:31:11.454 "ctrlr_data": { 00:31:11.454 "cntlid": 0, 00:31:11.454 "vendor_id": "0x1b36", 00:31:11.454 "model_number": "QEMU NVMe Ctrl", 00:31:11.454 "serial_number": "12341", 00:31:11.454 "firmware_revision": "8.0.0", 00:31:11.454 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:11.454 "oacs": { 00:31:11.454 "security": 0, 00:31:11.454 "format": 1, 00:31:11.454 "firmware": 0, 00:31:11.454 "ns_manage": 1 00:31:11.454 }, 00:31:11.454 "multi_ctrlr": false, 00:31:11.454 "ana_reporting": false 00:31:11.454 }, 00:31:11.454 "vs": { 00:31:11.454 "nvme_version": "1.4" 00:31:11.454 }, 00:31:11.454 "ns_data": { 00:31:11.454 "id": 1, 00:31:11.454 "can_share": false 00:31:11.454 } 00:31:11.454 } 00:31:11.454 ], 00:31:11.454 "mp_policy": "active_passive" 00:31:11.454 } 00:31:11.454 } 00:31:11.454 ]' 00:31:11.454 10:02:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:11.454 10:02:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:11.454 10:02:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:11.454 10:02:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:11.454 10:02:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:11.454 10:02:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:31:11.454 10:02:58 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:31:11.454 10:02:58 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:11.454 10:02:58 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:31:11.454 10:02:58 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:11.454 10:02:58 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:11.717 10:02:59 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=c077cd9c-57e1-4387-8c75-16527a69c0b7 00:31:11.717 10:02:59 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:31:11.717 10:02:59 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c077cd9c-57e1-4387-8c75-16527a69c0b7 00:31:11.717 10:02:59 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:11.977 10:02:59 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=af9fc5c5-c8db-4c17-b39a-444db5a80183 00:31:11.977 10:02:59 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u af9fc5c5-c8db-4c17-b39a-444db5a80183 00:31:12.237 10:02:59 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=336e2120-2d63-4037-824d-f7a355cdfa77 00:31:12.237 10:02:59 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:12.237 10:02:59 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 336e2120-2d63-4037-824d-f7a355cdfa77 00:31:12.237 10:02:59 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:31:12.237 10:02:59 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:12.237 10:02:59 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=336e2120-2d63-4037-824d-f7a355cdfa77 00:31:12.237 10:02:59 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:31:12.237 10:02:59 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 336e2120-2d63-4037-824d-f7a355cdfa77 00:31:12.237 10:02:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=336e2120-2d63-4037-824d-f7a355cdfa77 00:31:12.237 10:02:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:12.237 10:02:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:12.237 10:02:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:12.237 10:02:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 336e2120-2d63-4037-824d-f7a355cdfa77 00:31:12.496 10:02:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:12.496 { 00:31:12.496 "name": "336e2120-2d63-4037-824d-f7a355cdfa77", 00:31:12.496 "aliases": [ 00:31:12.496 "lvs/nvme0n1p0" 00:31:12.496 ], 00:31:12.496 "product_name": "Logical Volume", 00:31:12.496 "block_size": 4096, 00:31:12.496 "num_blocks": 26476544, 00:31:12.496 "uuid": "336e2120-2d63-4037-824d-f7a355cdfa77", 00:31:12.496 "assigned_rate_limits": { 00:31:12.496 "rw_ios_per_sec": 0, 00:31:12.496 "rw_mbytes_per_sec": 0, 00:31:12.496 "r_mbytes_per_sec": 0, 00:31:12.496 "w_mbytes_per_sec": 0 00:31:12.496 }, 00:31:12.496 "claimed": false, 00:31:12.496 "zoned": false, 00:31:12.496 "supported_io_types": { 00:31:12.496 "read": true, 00:31:12.496 "write": true, 00:31:12.496 "unmap": true, 00:31:12.496 "flush": false, 00:31:12.496 "reset": true, 00:31:12.496 "nvme_admin": false, 00:31:12.496 "nvme_io": false, 00:31:12.496 "nvme_io_md": false, 00:31:12.496 "write_zeroes": true, 00:31:12.496 "zcopy": false, 00:31:12.496 "get_zone_info": false, 00:31:12.496 "zone_management": false, 00:31:12.496 
"zone_append": false, 00:31:12.496 "compare": false, 00:31:12.496 "compare_and_write": false, 00:31:12.496 "abort": false, 00:31:12.496 "seek_hole": true, 00:31:12.496 "seek_data": true, 00:31:12.496 "copy": false, 00:31:12.496 "nvme_iov_md": false 00:31:12.496 }, 00:31:12.496 "driver_specific": { 00:31:12.496 "lvol": { 00:31:12.496 "lvol_store_uuid": "af9fc5c5-c8db-4c17-b39a-444db5a80183", 00:31:12.496 "base_bdev": "nvme0n1", 00:31:12.496 "thin_provision": true, 00:31:12.496 "num_allocated_clusters": 0, 00:31:12.496 "snapshot": false, 00:31:12.496 "clone": false, 00:31:12.496 "esnap_clone": false 00:31:12.496 } 00:31:12.496 } 00:31:12.496 } 00:31:12.496 ]' 00:31:12.496 10:02:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:12.496 10:02:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:12.496 10:02:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:12.496 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:12.496 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:12.496 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:12.496 10:03:00 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:12.496 10:03:00 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:12.496 10:03:00 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:12.757 10:03:00 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:12.757 10:03:00 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:12.757 10:03:00 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 336e2120-2d63-4037-824d-f7a355cdfa77 00:31:12.757 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=336e2120-2d63-4037-824d-f7a355cdfa77 00:31:12.757 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:12.757 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:12.757 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:12.757 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 336e2120-2d63-4037-824d-f7a355cdfa77 00:31:13.018 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:13.018 { 00:31:13.018 "name": "336e2120-2d63-4037-824d-f7a355cdfa77", 00:31:13.018 "aliases": [ 00:31:13.018 "lvs/nvme0n1p0" 00:31:13.018 ], 00:31:13.018 "product_name": "Logical Volume", 00:31:13.018 "block_size": 4096, 00:31:13.018 "num_blocks": 26476544, 00:31:13.018 "uuid": "336e2120-2d63-4037-824d-f7a355cdfa77", 00:31:13.018 "assigned_rate_limits": { 00:31:13.018 "rw_ios_per_sec": 0, 00:31:13.018 "rw_mbytes_per_sec": 0, 00:31:13.018 "r_mbytes_per_sec": 0, 00:31:13.018 "w_mbytes_per_sec": 0 00:31:13.018 }, 00:31:13.018 "claimed": false, 00:31:13.018 "zoned": false, 00:31:13.018 "supported_io_types": { 00:31:13.018 "read": true, 00:31:13.018 "write": true, 00:31:13.018 "unmap": true, 00:31:13.018 "flush": false, 00:31:13.018 "reset": true, 00:31:13.018 "nvme_admin": false, 00:31:13.018 "nvme_io": false, 00:31:13.018 "nvme_io_md": false, 00:31:13.018 "write_zeroes": true, 00:31:13.018 "zcopy": false, 00:31:13.018 "get_zone_info": false, 00:31:13.018 
"zone_management": false, 00:31:13.018 "zone_append": false, 00:31:13.018 "compare": false, 00:31:13.018 "compare_and_write": false, 00:31:13.018 "abort": false, 00:31:13.018 "seek_hole": true, 00:31:13.018 "seek_data": true, 00:31:13.018 "copy": false, 00:31:13.018 "nvme_iov_md": false 00:31:13.018 }, 00:31:13.018 "driver_specific": { 00:31:13.018 "lvol": { 00:31:13.018 "lvol_store_uuid": "af9fc5c5-c8db-4c17-b39a-444db5a80183", 00:31:13.018 "base_bdev": "nvme0n1", 00:31:13.018 "thin_provision": true, 00:31:13.018 "num_allocated_clusters": 0, 00:31:13.018 "snapshot": false, 00:31:13.018 "clone": false, 00:31:13.018 "esnap_clone": false 00:31:13.018 } 00:31:13.018 } 00:31:13.018 } 00:31:13.018 ]' 00:31:13.018 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:13.018 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:13.018 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:13.018 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:13.018 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:13.018 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:13.018 10:03:00 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:13.018 10:03:00 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:13.278 10:03:00 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:13.278 10:03:00 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 336e2120-2d63-4037-824d-f7a355cdfa77 00:31:13.278 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=336e2120-2d63-4037-824d-f7a355cdfa77 00:31:13.278 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:13.278 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:13.278 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:13.278 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 336e2120-2d63-4037-824d-f7a355cdfa77 00:31:13.538 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:13.538 { 00:31:13.538 "name": "336e2120-2d63-4037-824d-f7a355cdfa77", 00:31:13.538 "aliases": [ 00:31:13.538 "lvs/nvme0n1p0" 00:31:13.538 ], 00:31:13.538 "product_name": "Logical Volume", 00:31:13.538 "block_size": 4096, 00:31:13.538 "num_blocks": 26476544, 00:31:13.538 "uuid": "336e2120-2d63-4037-824d-f7a355cdfa77", 00:31:13.538 "assigned_rate_limits": { 00:31:13.538 "rw_ios_per_sec": 0, 00:31:13.538 "rw_mbytes_per_sec": 0, 00:31:13.538 "r_mbytes_per_sec": 0, 00:31:13.538 "w_mbytes_per_sec": 0 00:31:13.538 }, 00:31:13.538 "claimed": false, 00:31:13.538 "zoned": false, 00:31:13.538 "supported_io_types": { 00:31:13.538 "read": true, 00:31:13.538 "write": true, 00:31:13.538 "unmap": true, 00:31:13.538 "flush": false, 00:31:13.538 "reset": true, 00:31:13.538 "nvme_admin": false, 00:31:13.538 "nvme_io": false, 00:31:13.538 "nvme_io_md": false, 00:31:13.538 "write_zeroes": true, 00:31:13.538 "zcopy": false, 00:31:13.538 "get_zone_info": false, 00:31:13.538 "zone_management": false, 00:31:13.538 "zone_append": false, 00:31:13.538 "compare": false, 00:31:13.538 "compare_and_write": false, 00:31:13.538 "abort": false, 
00:31:13.538 "seek_hole": true, 00:31:13.538 "seek_data": true, 00:31:13.538 "copy": false, 00:31:13.538 "nvme_iov_md": false 00:31:13.538 }, 00:31:13.538 "driver_specific": { 00:31:13.538 "lvol": { 00:31:13.538 "lvol_store_uuid": "af9fc5c5-c8db-4c17-b39a-444db5a80183", 00:31:13.538 "base_bdev": "nvme0n1", 00:31:13.538 "thin_provision": true, 00:31:13.538 "num_allocated_clusters": 0, 00:31:13.538 "snapshot": false, 00:31:13.538 "clone": false, 00:31:13.538 "esnap_clone": false 00:31:13.538 } 00:31:13.538 } 00:31:13.538 } 00:31:13.538 ]' 00:31:13.538 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:13.538 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:13.538 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:13.538 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:13.538 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:13.538 10:03:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:13.538 10:03:00 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:13.538 10:03:00 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 336e2120-2d63-4037-824d-f7a355cdfa77 --l2p_dram_limit 10' 00:31:13.538 10:03:00 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:13.538 10:03:00 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:13.538 10:03:00 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:13.538 10:03:00 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:13.538 10:03:00 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:13.538 10:03:00 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 336e2120-2d63-4037-824d-f7a355cdfa77 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:13.801 [2024-12-05 10:03:01.169277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.801 [2024-12-05 10:03:01.169317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:13.801 [2024-12-05 10:03:01.169329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:13.801 [2024-12-05 10:03:01.169336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.801 [2024-12-05 10:03:01.169385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.801 [2024-12-05 10:03:01.169393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:13.801 [2024-12-05 10:03:01.169401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:13.801 [2024-12-05 10:03:01.169407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.801 [2024-12-05 10:03:01.169425] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:13.801 [2024-12-05 10:03:01.169970] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:13.801 [2024-12-05 10:03:01.169991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.801 [2024-12-05 10:03:01.169997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:13.801 [2024-12-05 10:03:01.170006] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:31:13.801 [2024-12-05 10:03:01.170012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.801 [2024-12-05 10:03:01.170060] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0d7f734e-16bb-40f2-894f-12462e7ca1e0 00:31:13.801 [2024-12-05 10:03:01.170981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.801 [2024-12-05 10:03:01.171011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:13.801 [2024-12-05 10:03:01.171019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:13.801 [2024-12-05 10:03:01.171028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.801 [2024-12-05 10:03:01.175687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.801 [2024-12-05 10:03:01.175800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:13.801 [2024-12-05 10:03:01.175812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.628 ms 00:31:13.801 [2024-12-05 10:03:01.175819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.801 [2024-12-05 10:03:01.175887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.801 [2024-12-05 10:03:01.175895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:13.801 [2024-12-05 10:03:01.175901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:13.801 [2024-12-05 10:03:01.175910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.801 [2024-12-05 10:03:01.175938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.801 [2024-12-05 10:03:01.175947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:13.801 [2024-12-05 10:03:01.175955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:13.801 [2024-12-05 10:03:01.175962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.801 [2024-12-05 10:03:01.175977] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:13.801 [2024-12-05 10:03:01.178842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.801 [2024-12-05 10:03:01.178930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:13.801 [2024-12-05 10:03:01.178943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.866 ms 00:31:13.801 [2024-12-05 10:03:01.178949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.801 [2024-12-05 10:03:01.178978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.801 [2024-12-05 10:03:01.178985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:13.801 [2024-12-05 10:03:01.178992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:13.801 [2024-12-05 10:03:01.178998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.801 [2024-12-05 10:03:01.179026] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:13.801 [2024-12-05 10:03:01.179139] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:13.801 [2024-12-05 10:03:01.179151] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:13.801 [2024-12-05 10:03:01.179160] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:13.801 [2024-12-05 10:03:01.179168] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:13.801 [2024-12-05 10:03:01.179175] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:13.801 [2024-12-05 10:03:01.179182] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:13.801 [2024-12-05 10:03:01.179187] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:13.801 [2024-12-05 10:03:01.179197] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:13.801 [2024-12-05 10:03:01.179202] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:13.801 [2024-12-05 10:03:01.179209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.801 [2024-12-05 10:03:01.179220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:13.801 [2024-12-05 10:03:01.179227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:31:13.801 [2024-12-05 10:03:01.179232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.801 [2024-12-05 10:03:01.179299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.801 [2024-12-05 10:03:01.179306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:13.801 [2024-12-05 10:03:01.179313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:13.801 [2024-12-05 10:03:01.179318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.801 [2024-12-05 10:03:01.179401] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:13.801 [2024-12-05 10:03:01.179409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:13.801 [2024-12-05 10:03:01.179416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:13.801 [2024-12-05 10:03:01.179422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:13.801 [2024-12-05 10:03:01.179429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:13.801 [2024-12-05 10:03:01.179434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:13.801 [2024-12-05 10:03:01.179440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:13.801 [2024-12-05 10:03:01.179445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:13.801 [2024-12-05 10:03:01.179453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:13.801 [2024-12-05 10:03:01.179458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:13.801 [2024-12-05 10:03:01.179465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:13.801 [2024-12-05 10:03:01.179469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:13.801 [2024-12-05 10:03:01.179476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:13.801 [2024-12-05 10:03:01.179482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:13.801 [2024-12-05 10:03:01.179489] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:13.801 [2024-12-05 10:03:01.179494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:13.801 [2024-12-05 10:03:01.179502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:13.801 [2024-12-05 10:03:01.179507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:13.801 [2024-12-05 10:03:01.179529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:13.801 [2024-12-05 10:03:01.179534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:13.801 [2024-12-05 10:03:01.179545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:13.801 [2024-12-05 10:03:01.179551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:13.801 [2024-12-05 10:03:01.179557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:13.801 [2024-12-05 10:03:01.179562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:13.801 [2024-12-05 10:03:01.179569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:13.801 [2024-12-05 10:03:01.179574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:13.801 [2024-12-05 10:03:01.179580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:13.801 [2024-12-05 10:03:01.179585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:13.801 [2024-12-05 10:03:01.179591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:13.801 [2024-12-05 10:03:01.179596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:13.801 [2024-12-05 10:03:01.179603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:13.801 [2024-12-05 10:03:01.179608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:13.801 [2024-12-05 10:03:01.179616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:13.801 [2024-12-05 10:03:01.179621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:13.801 [2024-12-05 10:03:01.179628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:13.801 [2024-12-05 10:03:01.179633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:13.801 [2024-12-05 10:03:01.179639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:13.801 [2024-12-05 10:03:01.179644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:13.801 [2024-12-05 10:03:01.179651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:13.801 [2024-12-05 10:03:01.179656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:13.801 [2024-12-05 10:03:01.179662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:13.802 [2024-12-05 10:03:01.179667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:13.802 [2024-12-05 10:03:01.179673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:13.802 [2024-12-05 10:03:01.179678] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:13.802 [2024-12-05 10:03:01.179685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:13.802 [2024-12-05 10:03:01.179691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:31:13.802 [2024-12-05 10:03:01.179697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:13.802 [2024-12-05 10:03:01.179703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:13.802 [2024-12-05 10:03:01.179710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:13.802 [2024-12-05 10:03:01.179715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:13.802 [2024-12-05 10:03:01.179721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:13.802 [2024-12-05 10:03:01.179726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:13.802 [2024-12-05 10:03:01.179735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:13.802 [2024-12-05 10:03:01.179742] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:13.802 [2024-12-05 10:03:01.179751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:13.802 [2024-12-05 10:03:01.179758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:13.802 [2024-12-05 10:03:01.179764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:13.802 [2024-12-05 10:03:01.179770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:13.802 [2024-12-05 10:03:01.179776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:13.802 [2024-12-05 10:03:01.179782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:13.802 [2024-12-05 10:03:01.179789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:13.802 [2024-12-05 10:03:01.179795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:13.802 [2024-12-05 10:03:01.179801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:13.802 [2024-12-05 10:03:01.179807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:13.802 [2024-12-05 10:03:01.179815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:13.802 [2024-12-05 10:03:01.179820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:13.802 [2024-12-05 10:03:01.179827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:13.802 [2024-12-05 10:03:01.179832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:13.802 [2024-12-05 10:03:01.179839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:31:13.802 [2024-12-05 10:03:01.179845] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:13.802 [2024-12-05 10:03:01.179852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:13.802 [2024-12-05 10:03:01.179858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:13.802 [2024-12-05 10:03:01.179865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:13.802 [2024-12-05 10:03:01.179871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:13.802 [2024-12-05 10:03:01.179878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:13.802 [2024-12-05 10:03:01.179884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.802 [2024-12-05 10:03:01.179891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:13.802 [2024-12-05 10:03:01.179897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:31:13.802 [2024-12-05 10:03:01.179904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.802 [2024-12-05 10:03:01.179942] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:13.802 [2024-12-05 10:03:01.179953] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:17.109 [2024-12-05 10:03:04.517737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.109 [2024-12-05 10:03:04.517821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:17.109 [2024-12-05 10:03:04.517840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3337.779 ms 00:31:17.109 [2024-12-05 10:03:04.517852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.109 [2024-12-05 10:03:04.549300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.109 [2024-12-05 10:03:04.549585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:17.109 [2024-12-05 10:03:04.549609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.203 ms 00:31:17.109 [2024-12-05 10:03:04.549621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.109 [2024-12-05 10:03:04.549766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.109 [2024-12-05 10:03:04.549780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:17.109 [2024-12-05 10:03:04.549791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:31:17.109 [2024-12-05 10:03:04.549808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.109 [2024-12-05 10:03:04.585083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.109 [2024-12-05 10:03:04.585293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:17.109 [2024-12-05 10:03:04.585313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.239 ms 00:31:17.109 [2024-12-05 10:03:04.585328] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.109 [2024-12-05 10:03:04.585364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.109 [2024-12-05 10:03:04.585379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:17.109 [2024-12-05 10:03:04.585388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:17.109 [2024-12-05 10:03:04.585406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.109 [2024-12-05 10:03:04.586002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.109 [2024-12-05 10:03:04.586031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:17.109 [2024-12-05 10:03:04.586044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:31:17.109 [2024-12-05 10:03:04.586055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.109 [2024-12-05 10:03:04.586169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.109 [2024-12-05 10:03:04.586182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:17.109 [2024-12-05 10:03:04.586194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:31:17.109 [2024-12-05 10:03:04.586209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.109 [2024-12-05 10:03:04.603427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.109 [2024-12-05 10:03:04.603479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:17.109 [2024-12-05 10:03:04.603491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.199 ms 00:31:17.109 [2024-12-05 10:03:04.603502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.109 [2024-12-05 10:03:04.625444] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:17.109 [2024-12-05 10:03:04.629683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.109 [2024-12-05 10:03:04.629729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:17.109 [2024-12-05 10:03:04.629747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.071 ms 00:31:17.109 [2024-12-05 10:03:04.629757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.109 [2024-12-05 10:03:04.722927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.109 [2024-12-05 10:03:04.722987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:17.109 [2024-12-05 10:03:04.723005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.118 ms 00:31:17.109 [2024-12-05 10:03:04.723014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.109 [2024-12-05 10:03:04.723224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.109 [2024-12-05 10:03:04.723242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:17.109 [2024-12-05 10:03:04.723258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:31:17.109 [2024-12-05 10:03:04.723266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.371 [2024-12-05 10:03:04.748593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.371 [2024-12-05 10:03:04.748778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:31:17.371 [2024-12-05 10:03:04.748804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.270 ms 00:31:17.371 [2024-12-05 10:03:04.748814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.371 [2024-12-05 10:03:04.773252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.371 [2024-12-05 10:03:04.773297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:17.371 [2024-12-05 10:03:04.773314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.316 ms 00:31:17.371 [2024-12-05 10:03:04.773323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.371 [2024-12-05 10:03:04.773972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.371 [2024-12-05 10:03:04.773987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:17.371 [2024-12-05 10:03:04.773999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:31:17.371 [2024-12-05 10:03:04.774010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.371 [2024-12-05 10:03:04.861005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.371 [2024-12-05 10:03:04.861051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:17.371 [2024-12-05 10:03:04.861071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.951 ms 00:31:17.371 [2024-12-05 10:03:04.861080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.371 [2024-12-05 10:03:04.887979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.371 [2024-12-05 10:03:04.888196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:17.371 [2024-12-05 10:03:04.888224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.808 ms 00:31:17.371 [2024-12-05 10:03:04.888233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.371 [2024-12-05 10:03:04.913720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.371 [2024-12-05 10:03:04.913763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:17.371 [2024-12-05 10:03:04.913778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.286 ms 00:31:17.371 [2024-12-05 10:03:04.913785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.371 [2024-12-05 10:03:04.939521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.371 [2024-12-05 10:03:04.939569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:17.371 [2024-12-05 10:03:04.939585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.676 ms 00:31:17.371 [2024-12-05 10:03:04.939593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.371 [2024-12-05 10:03:04.939648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.371 [2024-12-05 10:03:04.939658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:17.371 [2024-12-05 10:03:04.939673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:17.371 [2024-12-05 10:03:04.939680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.371 [2024-12-05 10:03:04.939769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.371 [2024-12-05 
10:03:04.939783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:31:17.371 [2024-12-05 10:03:04.939795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:31:17.371 [2024-12-05 10:03:04.939803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:17.371 [2024-12-05 10:03:04.940960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3771.150 ms, result 0
00:31:17.371 {
00:31:17.371 "name": "ftl0",
00:31:17.371 "uuid": "0d7f734e-16bb-40f2-894f-12462e7ca1e0"
00:31:17.371 }
00:31:17.371 10:03:04 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:31:17.371 10:03:04 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:31:17.633 10:03:05 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}'
00:31:17.633 10:03:05 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:31:17.896 [2024-12-05 10:03:05.392351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:17.896 [2024-12-05 10:03:05.392588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:31:17.896 [2024-12-05 10:03:05.392613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:31:17.896 [2024-12-05 10:03:05.392625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:17.896 [2024-12-05 10:03:05.392659] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:31:17.896 [2024-12-05 10:03:05.395595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:17.896 [2024-12-05 10:03:05.395636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:31:17.896 [2024-12-05 10:03:05.395651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.914 ms
00:31:17.896 [2024-12-05 10:03:05.395660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:17.896 [2024-12-05 10:03:05.395940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:17.896 [2024-12-05 10:03:05.395955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:31:17.896 [2024-12-05 10:03:05.395967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms
00:31:17.896 [2024-12-05 10:03:05.395977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:17.896 [2024-12-05 10:03:05.399247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:17.896 [2024-12-05 10:03:05.399400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:31:17.896 [2024-12-05 10:03:05.399419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.251 ms
00:31:17.896 [2024-12-05 10:03:05.399428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:17.896 [2024-12-05 10:03:05.405986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:17.896 [2024-12-05 10:03:05.406131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:31:17.896 [2024-12-05 10:03:05.406213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.528 ms
00:31:17.896 [2024-12-05 10:03:05.406238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:17.896 [2024-12-05 10:03:05.428551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:31:17.896 [2024-12-05 10:03:05.428758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:17.896 [2024-12-05 10:03:05.428823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.216 ms 00:31:17.896 [2024-12-05 10:03:05.428842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.896 [2024-12-05 10:03:05.442714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.896 [2024-12-05 10:03:05.442881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:17.896 [2024-12-05 10:03:05.442959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.779 ms 00:31:17.896 [2024-12-05 10:03:05.442979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.896 [2024-12-05 10:03:05.443117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.896 [2024-12-05 10:03:05.443143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:17.896 [2024-12-05 10:03:05.443162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:31:17.896 [2024-12-05 10:03:05.443207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.896 [2024-12-05 10:03:05.462637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.896 [2024-12-05 10:03:05.462751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:17.896 [2024-12-05 10:03:05.462800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.397 ms 00:31:17.896 [2024-12-05 10:03:05.462817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.896 [2024-12-05 10:03:05.481120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.896 [2024-12-05 10:03:05.481225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:17.896 [2024-12-05 10:03:05.481273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.247 ms 00:31:17.896 [2024-12-05 10:03:05.481290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.896 [2024-12-05 10:03:05.498953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.896 [2024-12-05 10:03:05.499054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:17.896 [2024-12-05 10:03:05.499100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.622 ms 00:31:17.896 [2024-12-05 10:03:05.499117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.896 [2024-12-05 10:03:05.516081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.896 [2024-12-05 10:03:05.516179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:17.896 [2024-12-05 10:03:05.516222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.896 ms 00:31:17.896 [2024-12-05 10:03:05.516239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.896 [2024-12-05 10:03:05.516273] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:17.896 [2024-12-05 10:03:05.516392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:17.896 [2024-12-05 10:03:05.516434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:17.896 [2024-12-05 10:03:05.516480] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:17.896 [2024-12-05 10:03:05.516508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:17.896 [2024-12-05 10:03:05.516673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:17.896 [2024-12-05 10:03:05.516703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:17.896 [2024-12-05 10:03:05.516727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:17.896 [2024-12-05 10:03:05.516753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:17.896 [2024-12-05 10:03:05.516777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:17.896 [2024-12-05 10:03:05.516836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:17.896 [2024-12-05 10:03:05.516859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:17.896 [2024-12-05 10:03:05.516883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.516906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.516930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.516978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517346] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.517985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 
[2024-12-05 10:03:05.518265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:31:17.897 [2024-12-05 10:03:05.518872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.518997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.519005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.519011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.519018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:17.897 [2024-12-05 10:03:05.519031] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:17.897 [2024-12-05 10:03:05.519039] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0d7f734e-16bb-40f2-894f-12462e7ca1e0 
00:31:17.898 [2024-12-05 10:03:05.519045] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:17.898 [2024-12-05 10:03:05.519054] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:17.898 [2024-12-05 10:03:05.519062] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:17.898 [2024-12-05 10:03:05.519069] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:17.898 [2024-12-05 10:03:05.519075] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:17.898 [2024-12-05 10:03:05.519082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:17.898 [2024-12-05 10:03:05.519088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:17.898 [2024-12-05 10:03:05.519095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:17.898 [2024-12-05 10:03:05.519100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:17.898 [2024-12-05 10:03:05.519107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.898 [2024-12-05 10:03:05.519113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:17.898 [2024-12-05 10:03:05.519121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.835 ms 00:31:17.898 [2024-12-05 10:03:05.519128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.159 [2024-12-05 10:03:05.528983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.159 [2024-12-05 10:03:05.529068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:18.159 [2024-12-05 10:03:05.529111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.824 ms 00:31:18.159 [2024-12-05 10:03:05.529128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.159 [2024-12-05 10:03:05.529415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.159 [2024-12-05 10:03:05.529474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:18.159 [2024-12-05 10:03:05.529546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:31:18.159 [2024-12-05 10:03:05.529565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.159 [2024-12-05 10:03:05.562176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.159 [2024-12-05 10:03:05.562266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:18.159 [2024-12-05 10:03:05.562312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.159 [2024-12-05 10:03:05.562330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.159 [2024-12-05 10:03:05.562387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.159 [2024-12-05 10:03:05.562420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:18.159 [2024-12-05 10:03:05.562442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.159 [2024-12-05 10:03:05.562458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.159 [2024-12-05 10:03:05.562550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.159 [2024-12-05 10:03:05.562607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:18.159 [2024-12-05 10:03:05.562644] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.159 [2024-12-05 10:03:05.562661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.159 [2024-12-05 10:03:05.562688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.159 [2024-12-05 10:03:05.562703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:18.159 [2024-12-05 10:03:05.562741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.159 [2024-12-05 10:03:05.562759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.159 [2024-12-05 10:03:05.621665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.159 [2024-12-05 10:03:05.621763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:18.159 [2024-12-05 10:03:05.621804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.159 [2024-12-05 10:03:05.621821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.159 [2024-12-05 10:03:05.670072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.159 [2024-12-05 10:03:05.670181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:18.159 [2024-12-05 10:03:05.670194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.159 [2024-12-05 10:03:05.670202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.159 [2024-12-05 10:03:05.670272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.159 [2024-12-05 10:03:05.670280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:18.159 [2024-12-05 10:03:05.670288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.159 [2024-12-05 10:03:05.670294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.159 [2024-12-05 10:03:05.670331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.159 [2024-12-05 10:03:05.670339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:18.159 [2024-12-05 10:03:05.670347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.159 [2024-12-05 10:03:05.670353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.159 [2024-12-05 10:03:05.670428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.159 [2024-12-05 10:03:05.670436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:18.159 [2024-12-05 10:03:05.670444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.159 [2024-12-05 10:03:05.670450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.159 [2024-12-05 10:03:05.670475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.159 [2024-12-05 10:03:05.670482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:18.159 [2024-12-05 10:03:05.670490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.159 [2024-12-05 10:03:05.670495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.159 [2024-12-05 10:03:05.670546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.159 [2024-12-05 10:03:05.670554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev
00:31:18.159 [2024-12-05 10:03:05.670561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:18.159 [2024-12-05 10:03:05.670567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:18.159 [2024-12-05 10:03:05.670603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:18.159 [2024-12-05 10:03:05.670611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:18.159 [2024-12-05 10:03:05.670619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:18.159 [2024-12-05 10:03:05.670625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:18.159 [2024-12-05 10:03:05.670723] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 278.361 ms, result 0
00:31:18.159 true
00:31:18.159 10:03:05 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 83984
00:31:18.159 10:03:05 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 83984 ']'
00:31:18.159 10:03:05 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 83984
00:31:18.159 10:03:05 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname
00:31:18.159 10:03:05 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:18.159 10:03:05 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83984
00:31:18.159 killing process with pid 83984
00:31:18.159 10:03:05 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:18.159 10:03:05 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:18.159 10:03:05 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83984'
00:31:18.159 10:03:05 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 83984
00:31:18.159 10:03:05 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 83984
00:31:23.446 10:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:31:26.748 262144+0 records in
00:31:26.748 262144+0 records out
00:31:26.748 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.19262 s, 336 MB/s
00:31:26.749 10:03:14 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:31:28.660 10:03:16 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:31:28.660 [2024-12-05 10:03:16.233173] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
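The dd figures above are self-consistent: bs=4K with count=256K means 262144 blocks of 4096 bytes, i.e. 1073741824 bytes (1 GiB), and dividing by the 3.19262 s elapsed gives roughly 336 MB/s, matching the dd report. As a reading aid, the following is a minimal sketch of the prepare-and-restore sequence this run executes, reconstructed from the restore.sh trace lines above; it is a simplified excerpt rather than the full test script, and the paths shown are the ones used by this particular run:

# Create a 1 GiB test file of random data (262144 blocks of 4 KiB)
dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
# Checksum the file so its contents can be verified after the restore
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
# Write the file onto the FTL bdev with spdk_dd, using the bdev subsystem
# config saved earlier via rpc.py save_subsystem_config
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
    --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json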
00:31:28.660 [2024-12-05 10:03:16.233407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84208 ] 00:31:28.920 [2024-12-05 10:03:16.386937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.920 [2024-12-05 10:03:16.493734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.181 [2024-12-05 10:03:16.782866] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:29.181 [2024-12-05 10:03:16.782953] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:29.444 [2024-12-05 10:03:16.944474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.444 [2024-12-05 10:03:16.944567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:29.444 [2024-12-05 10:03:16.944584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:29.444 [2024-12-05 10:03:16.944593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.444 [2024-12-05 10:03:16.944653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.444 [2024-12-05 10:03:16.944691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:29.444 [2024-12-05 10:03:16.944702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:29.444 [2024-12-05 10:03:16.944711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.444 [2024-12-05 10:03:16.944733] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:29.444 [2024-12-05 10:03:16.945479] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:29.444 [2024-12-05 10:03:16.945522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.444 [2024-12-05 10:03:16.945532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:29.444 [2024-12-05 10:03:16.945542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:31:29.444 [2024-12-05 10:03:16.945550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.444 [2024-12-05 10:03:16.947334] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:29.444 [2024-12-05 10:03:16.961948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.444 [2024-12-05 10:03:16.961999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:29.444 [2024-12-05 10:03:16.962015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.615 ms 00:31:29.444 [2024-12-05 10:03:16.962024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.444 [2024-12-05 10:03:16.962120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.444 [2024-12-05 10:03:16.962131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:29.444 [2024-12-05 10:03:16.962140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:29.444 [2024-12-05 10:03:16.962148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.444 [2024-12-05 10:03:16.970844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:29.444 [2024-12-05 10:03:16.971033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:29.444 [2024-12-05 10:03:16.971108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.611 ms 00:31:29.444 [2024-12-05 10:03:16.971141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.444 [2024-12-05 10:03:16.971240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.444 [2024-12-05 10:03:16.971266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:29.444 [2024-12-05 10:03:16.971287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:29.444 [2024-12-05 10:03:16.971306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.444 [2024-12-05 10:03:16.971430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.444 [2024-12-05 10:03:16.971461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:29.444 [2024-12-05 10:03:16.971483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:29.444 [2024-12-05 10:03:16.971503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.444 [2024-12-05 10:03:16.971569] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:29.444 [2024-12-05 10:03:16.975833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.444 [2024-12-05 10:03:16.976009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:29.444 [2024-12-05 10:03:16.976035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.270 ms 00:31:29.444 [2024-12-05 10:03:16.976044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.444 [2024-12-05 10:03:16.976096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.444 [2024-12-05 10:03:16.976106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:29.444 [2024-12-05 10:03:16.976115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:29.444 [2024-12-05 10:03:16.976123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.444 [2024-12-05 10:03:16.976191] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:29.444 [2024-12-05 10:03:16.976217] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:29.444 [2024-12-05 10:03:16.976256] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:29.444 [2024-12-05 10:03:16.976276] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:29.444 [2024-12-05 10:03:16.976384] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:29.444 [2024-12-05 10:03:16.976398] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:29.444 [2024-12-05 10:03:16.976409] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:29.444 [2024-12-05 10:03:16.976420] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:29.444 [2024-12-05 10:03:16.976431] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:29.444 [2024-12-05 10:03:16.976440] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:29.444 [2024-12-05 10:03:16.976449] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:29.444 [2024-12-05 10:03:16.976460] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:29.444 [2024-12-05 10:03:16.976468] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:29.444 [2024-12-05 10:03:16.976478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.444 [2024-12-05 10:03:16.976486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:29.444 [2024-12-05 10:03:16.976494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:31:29.444 [2024-12-05 10:03:16.976502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.444 [2024-12-05 10:03:16.976635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.444 [2024-12-05 10:03:16.976646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:29.444 [2024-12-05 10:03:16.976655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:29.444 [2024-12-05 10:03:16.976664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.444 [2024-12-05 10:03:16.976771] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:29.444 [2024-12-05 10:03:16.976784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:29.444 [2024-12-05 10:03:16.976793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:29.444 [2024-12-05 10:03:16.976801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.444 [2024-12-05 10:03:16.976809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:29.444 [2024-12-05 10:03:16.976816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:29.444 [2024-12-05 10:03:16.976824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:29.444 [2024-12-05 10:03:16.976831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:29.444 [2024-12-05 10:03:16.976842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:29.444 [2024-12-05 10:03:16.976851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:29.444 [2024-12-05 10:03:16.976859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:29.444 [2024-12-05 10:03:16.976866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:29.444 [2024-12-05 10:03:16.976882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:29.445 [2024-12-05 10:03:16.976896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:29.445 [2024-12-05 10:03:16.976908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:29.445 [2024-12-05 10:03:16.976916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.445 [2024-12-05 10:03:16.976922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:29.445 [2024-12-05 10:03:16.976930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:29.445 [2024-12-05 10:03:16.976937] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.445 [2024-12-05 10:03:16.976944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:29.445 [2024-12-05 10:03:16.976951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:29.445 [2024-12-05 10:03:16.976958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:29.445 [2024-12-05 10:03:16.976965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:29.445 [2024-12-05 10:03:16.976973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:29.445 [2024-12-05 10:03:16.976981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:29.445 [2024-12-05 10:03:16.976988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:29.445 [2024-12-05 10:03:16.976995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:29.445 [2024-12-05 10:03:16.977002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:29.445 [2024-12-05 10:03:16.977009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:29.445 [2024-12-05 10:03:16.977015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:29.445 [2024-12-05 10:03:16.977021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:29.445 [2024-12-05 10:03:16.977029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:29.445 [2024-12-05 10:03:16.977037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:29.445 [2024-12-05 10:03:16.977044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:29.445 [2024-12-05 10:03:16.977051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:29.445 [2024-12-05 10:03:16.977057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:29.445 [2024-12-05 10:03:16.977064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:29.445 [2024-12-05 10:03:16.977070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:29.445 [2024-12-05 10:03:16.977077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:29.445 [2024-12-05 10:03:16.977083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.445 [2024-12-05 10:03:16.977091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:29.445 [2024-12-05 10:03:16.977098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:29.445 [2024-12-05 10:03:16.977105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.445 [2024-12-05 10:03:16.977112] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:29.445 [2024-12-05 10:03:16.977120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:29.445 [2024-12-05 10:03:16.977128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:29.445 [2024-12-05 10:03:16.977137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.445 [2024-12-05 10:03:16.977146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:29.445 [2024-12-05 10:03:16.977153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:29.445 [2024-12-05 10:03:16.977160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:29.445 
[2024-12-05 10:03:16.977167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:29.445 [2024-12-05 10:03:16.977173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:29.445 [2024-12-05 10:03:16.977181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:29.445 [2024-12-05 10:03:16.977189] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:29.445 [2024-12-05 10:03:16.977198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:29.445 [2024-12-05 10:03:16.977211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:29.445 [2024-12-05 10:03:16.977219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:29.445 [2024-12-05 10:03:16.977227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:29.445 [2024-12-05 10:03:16.977234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:29.445 [2024-12-05 10:03:16.977241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:29.445 [2024-12-05 10:03:16.977248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:29.445 [2024-12-05 10:03:16.977254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:29.445 [2024-12-05 10:03:16.977261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:29.445 [2024-12-05 10:03:16.977269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:29.445 [2024-12-05 10:03:16.977277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:29.445 [2024-12-05 10:03:16.977284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:29.445 [2024-12-05 10:03:16.977290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:29.445 [2024-12-05 10:03:16.977297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:29.445 [2024-12-05 10:03:16.977304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:29.445 [2024-12-05 10:03:16.977311] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:29.445 [2024-12-05 10:03:16.977327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:29.445 [2024-12-05 10:03:16.977335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:29.445 [2024-12-05 10:03:16.977343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:29.445 [2024-12-05 10:03:16.977349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:29.445 [2024-12-05 10:03:16.977357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:29.445 [2024-12-05 10:03:16.977364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.445 [2024-12-05 10:03:16.977372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:29.445 [2024-12-05 10:03:16.977380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:31:29.445 [2024-12-05 10:03:16.977390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.445 [2024-12-05 10:03:17.009755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.445 [2024-12-05 10:03:17.009957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:29.445 [2024-12-05 10:03:17.009976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.317 ms 00:31:29.445 [2024-12-05 10:03:17.009993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.445 [2024-12-05 10:03:17.010094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.445 [2024-12-05 10:03:17.010103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:29.445 [2024-12-05 10:03:17.010113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:29.445 [2024-12-05 10:03:17.010122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.445 [2024-12-05 10:03:17.054315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.445 [2024-12-05 10:03:17.054370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:29.445 [2024-12-05 10:03:17.054384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.124 ms 00:31:29.445 [2024-12-05 10:03:17.054394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.445 [2024-12-05 10:03:17.054447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.445 [2024-12-05 10:03:17.054459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:29.445 [2024-12-05 10:03:17.054472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:29.445 [2024-12-05 10:03:17.054479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.445 [2024-12-05 10:03:17.055116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.445 [2024-12-05 10:03:17.055152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:29.445 [2024-12-05 10:03:17.055163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:31:29.445 [2024-12-05 10:03:17.055171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.445 [2024-12-05 10:03:17.055333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.445 [2024-12-05 10:03:17.055346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:29.445 [2024-12-05 10:03:17.055361] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:31:29.445 [2024-12-05 10:03:17.055370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.707 [2024-12-05 10:03:17.071370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.707 [2024-12-05 10:03:17.071420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:29.707 [2024-12-05 10:03:17.071433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.979 ms 00:31:29.707 [2024-12-05 10:03:17.071441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.707 [2024-12-05 10:03:17.085905] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:29.707 [2024-12-05 10:03:17.085960] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:29.707 [2024-12-05 10:03:17.085974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.707 [2024-12-05 10:03:17.085983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:29.707 [2024-12-05 10:03:17.085994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.377 ms 00:31:29.707 [2024-12-05 10:03:17.086002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.707 [2024-12-05 10:03:17.112152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.707 [2024-12-05 10:03:17.112212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:29.707 [2024-12-05 10:03:17.112226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.090 ms 00:31:29.707 [2024-12-05 10:03:17.112234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.707 [2024-12-05 10:03:17.125345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.707 [2024-12-05 10:03:17.125583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:29.707 [2024-12-05 10:03:17.125606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.049 ms 00:31:29.708 [2024-12-05 10:03:17.125614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.708 [2024-12-05 10:03:17.138715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.708 [2024-12-05 10:03:17.138766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:29.708 [2024-12-05 10:03:17.138778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.001 ms 00:31:29.708 [2024-12-05 10:03:17.138787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.708 [2024-12-05 10:03:17.139434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.708 [2024-12-05 10:03:17.139464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:29.708 [2024-12-05 10:03:17.139475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:31:29.708 [2024-12-05 10:03:17.139488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.708 [2024-12-05 10:03:17.207063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.708 [2024-12-05 10:03:17.207126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:29.708 [2024-12-05 10:03:17.207144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.552 ms 00:31:29.708 [2024-12-05 10:03:17.207161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.708 [2024-12-05 10:03:17.218555] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:29.708 [2024-12-05 10:03:17.221793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.708 [2024-12-05 10:03:17.221840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:29.708 [2024-12-05 10:03:17.221853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.565 ms 00:31:29.708 [2024-12-05 10:03:17.221862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.708 [2024-12-05 10:03:17.221954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.708 [2024-12-05 10:03:17.221966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:29.708 [2024-12-05 10:03:17.221976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:29.708 [2024-12-05 10:03:17.221985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.708 [2024-12-05 10:03:17.222060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.708 [2024-12-05 10:03:17.222073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:29.708 [2024-12-05 10:03:17.222082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:29.708 [2024-12-05 10:03:17.222090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.708 [2024-12-05 10:03:17.222112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.708 [2024-12-05 10:03:17.222121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:29.708 [2024-12-05 10:03:17.222130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:29.708 [2024-12-05 10:03:17.222138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.708 [2024-12-05 10:03:17.222175] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:29.708 [2024-12-05 10:03:17.222189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.708 [2024-12-05 10:03:17.222197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:29.708 [2024-12-05 10:03:17.222208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:29.708 [2024-12-05 10:03:17.222218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.708 [2024-12-05 10:03:17.249010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.708 [2024-12-05 10:03:17.249220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:29.708 [2024-12-05 10:03:17.249245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.770 ms 00:31:29.708 [2024-12-05 10:03:17.249264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.708 [2024-12-05 10:03:17.249348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.708 [2024-12-05 10:03:17.249360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:29.708 [2024-12-05 10:03:17.249369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:31:29.708 [2024-12-05 10:03:17.249378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:29.708 [2024-12-05 10:03:17.250822] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.830 ms, result 0 00:31:30.653  [2024-12-05T10:03:19.304Z] Copying: 11/1024 [MB] (11 MBps) [per-second spdk_dd progress updates through 2024-12-05T10:04:12Z] [2024-12-05T10:04:12.142Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-05 10:04:12.066978] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.513 [2024-12-05 10:04:12.067015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:24.513 [2024-12-05 10:04:12.067028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:24.513 [2024-12-05 10:04:12.067034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.513 [2024-12-05 10:04:12.067050] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:24.513 [2024-12-05 10:04:12.069181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.513 [2024-12-05 10:04:12.069305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:24.513 [2024-12-05 10:04:12.069324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.120 ms 00:32:24.513 [2024-12-05 10:04:12.069330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.513 [2024-12-05 10:04:12.070565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.513 [2024-12-05 10:04:12.070586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:24.513 [2024-12-05 10:04:12.070593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.216 ms 00:32:24.513 [2024-12-05 10:04:12.070600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.513 [2024-12-05 10:04:12.070619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.513 [2024-12-05 10:04:12.070626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:24.513 [2024-12-05 10:04:12.070632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:24.513 [2024-12-05 10:04:12.070638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.513 [2024-12-05 10:04:12.070675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.513 [2024-12-05 10:04:12.070682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:24.513 [2024-12-05 10:04:12.070688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:24.513 [2024-12-05 10:04:12.070694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.513 [2024-12-05 10:04:12.070704] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:24.513 [2024-12-05 10:04:12.070714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:24.513 [2024-12-05 10:04:12.070721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:24.513 [2024-12-05 10:04:12.070727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:24.513 [2024-12-05 10:04:12.070733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:24.513 [2024-12-05 10:04:12.070739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:24.513 [2024-12-05 10:04:12.070745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:24.513 [2024-12-05 10:04:12.070751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:24.513 [2024-12-05 10:04:12.070756] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:24.513 [2024-12-05 10:04:12.070762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:24.513 [2024-12-05 10:04:12.070768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:24.513 [2024-12-05 10:04:12.070774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:24.513 [2024-12-05 10:04:12.070779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:24.513 [2024-12-05 10:04:12.070785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:24.513 [2024-12-05 10:04:12.070791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:24.513 [2024-12-05 10:04:12.070796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 
10:04:12.070906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.070994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:32:24.514 [2024-12-05 10:04:12.071056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:24.514 [2024-12-05 10:04:12.071308] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:24.514 [2024-12-05 10:04:12.071314] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0d7f734e-16bb-40f2-894f-12462e7ca1e0 00:32:24.514 [2024-12-05 10:04:12.071320] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:24.514 [2024-12-05 10:04:12.071326] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:24.514 [2024-12-05 10:04:12.071331] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:24.515 [2024-12-05 10:04:12.071341] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:24.515 [2024-12-05 10:04:12.071346] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:24.515 [2024-12-05 10:04:12.071353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:24.515 
[2024-12-05 10:04:12.071358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:24.515 [2024-12-05 10:04:12.071363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:24.515 [2024-12-05 10:04:12.071368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:24.515 [2024-12-05 10:04:12.071373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.515 [2024-12-05 10:04:12.071379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:24.515 [2024-12-05 10:04:12.071384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:32:24.515 [2024-12-05 10:04:12.071390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.515 [2024-12-05 10:04:12.081275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.515 [2024-12-05 10:04:12.081304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:24.515 [2024-12-05 10:04:12.081311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.873 ms 00:32:24.515 [2024-12-05 10:04:12.081317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.515 [2024-12-05 10:04:12.081595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.515 [2024-12-05 10:04:12.081603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:24.515 [2024-12-05 10:04:12.081609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:32:24.515 [2024-12-05 10:04:12.081614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.515 [2024-12-05 10:04:12.107084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.515 [2024-12-05 10:04:12.107118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:24.515 [2024-12-05 10:04:12.107125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.515 [2024-12-05 10:04:12.107131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.515 [2024-12-05 10:04:12.107171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.515 [2024-12-05 10:04:12.107177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:24.515 [2024-12-05 10:04:12.107183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.515 [2024-12-05 10:04:12.107189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.515 [2024-12-05 10:04:12.107220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.515 [2024-12-05 10:04:12.107229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:24.515 [2024-12-05 10:04:12.107235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.515 [2024-12-05 10:04:12.107240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.515 [2024-12-05 10:04:12.107252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.515 [2024-12-05 10:04:12.107258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:24.515 [2024-12-05 10:04:12.107266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.515 [2024-12-05 10:04:12.107272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.776 [2024-12-05 10:04:12.166761] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:32:24.776 [2024-12-05 10:04:12.166891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:24.776 [2024-12-05 10:04:12.166904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.776 [2024-12-05 10:04:12.166915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.776 [2024-12-05 10:04:12.215338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.776 [2024-12-05 10:04:12.215368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:24.776 [2024-12-05 10:04:12.215376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.776 [2024-12-05 10:04:12.215382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.776 [2024-12-05 10:04:12.215432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.776 [2024-12-05 10:04:12.215439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:24.776 [2024-12-05 10:04:12.215449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.776 [2024-12-05 10:04:12.215455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.776 [2024-12-05 10:04:12.215480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.776 [2024-12-05 10:04:12.215486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:24.776 [2024-12-05 10:04:12.215492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.776 [2024-12-05 10:04:12.215498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.776 [2024-12-05 10:04:12.215573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.776 [2024-12-05 10:04:12.215582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:24.776 [2024-12-05 10:04:12.215594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.776 [2024-12-05 10:04:12.215601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.776 [2024-12-05 10:04:12.215619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.776 [2024-12-05 10:04:12.215626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:24.776 [2024-12-05 10:04:12.215631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.776 [2024-12-05 10:04:12.215637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.776 [2024-12-05 10:04:12.215662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.776 [2024-12-05 10:04:12.215669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:24.776 [2024-12-05 10:04:12.215675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.776 [2024-12-05 10:04:12.215682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.777 [2024-12-05 10:04:12.215713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.777 [2024-12-05 10:04:12.215720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:24.777 [2024-12-05 10:04:12.215729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.777 [2024-12-05 10:04:12.215735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.777 
[2024-12-05 10:04:12.215821] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 148.819 ms, result 0 00:32:25.721 00:32:25.721 00:32:25.721 10:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:25.721 [2024-12-05 10:04:13.299824] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:32:25.721 [2024-12-05 10:04:13.299946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84787 ] 00:32:25.983 [2024-12-05 10:04:13.454016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.983 [2024-12-05 10:04:13.529175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.244 [2024-12-05 10:04:13.736558] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:26.244 [2024-12-05 10:04:13.736610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:26.507 [2024-12-05 10:04:13.883632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.507 [2024-12-05 10:04:13.883665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:26.507 [2024-12-05 10:04:13.883676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:26.507 [2024-12-05 10:04:13.883682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.507 [2024-12-05 10:04:13.883716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.507 [2024-12-05 10:04:13.883725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:26.507 [2024-12-05 10:04:13.883732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:32:26.507 [2024-12-05 10:04:13.883737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.507 [2024-12-05 10:04:13.883750] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:26.507 [2024-12-05 10:04:13.884262] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:26.507 [2024-12-05 10:04:13.884278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.507 [2024-12-05 10:04:13.884284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:26.507 [2024-12-05 10:04:13.884290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:32:26.507 [2024-12-05 10:04:13.884296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.507 [2024-12-05 10:04:13.884611] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:26.507 [2024-12-05 10:04:13.884633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.507 [2024-12-05 10:04:13.884641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:26.507 [2024-12-05 10:04:13.884647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:26.507 [2024-12-05 10:04:13.884653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:32:26.508 [2024-12-05 10:04:13.884684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.508 [2024-12-05 10:04:13.884691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:26.508 [2024-12-05 10:04:13.884697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:26.508 [2024-12-05 10:04:13.884703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.508 [2024-12-05 10:04:13.884893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.508 [2024-12-05 10:04:13.884901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:26.508 [2024-12-05 10:04:13.884907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:32:26.508 [2024-12-05 10:04:13.884913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.508 [2024-12-05 10:04:13.884960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.508 [2024-12-05 10:04:13.884966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:26.508 [2024-12-05 10:04:13.884972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:26.508 [2024-12-05 10:04:13.884978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.508 [2024-12-05 10:04:13.884993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.508 [2024-12-05 10:04:13.884999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:26.508 [2024-12-05 10:04:13.885006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:26.508 [2024-12-05 10:04:13.885012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.508 [2024-12-05 10:04:13.885025] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:26.508 [2024-12-05 10:04:13.887823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.508 [2024-12-05 10:04:13.887846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:26.508 [2024-12-05 10:04:13.887853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.800 ms 00:32:26.508 [2024-12-05 10:04:13.887859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.508 [2024-12-05 10:04:13.887883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.508 [2024-12-05 10:04:13.887890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:26.508 [2024-12-05 10:04:13.887896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:26.508 [2024-12-05 10:04:13.887901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.508 [2024-12-05 10:04:13.887935] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:26.508 [2024-12-05 10:04:13.887951] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:26.508 [2024-12-05 10:04:13.887978] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:26.508 [2024-12-05 10:04:13.887989] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:26.508 [2024-12-05 10:04:13.888069] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:26.508 [2024-12-05 10:04:13.888076] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:26.508 [2024-12-05 10:04:13.888084] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:26.508 [2024-12-05 10:04:13.888091] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:26.508 [2024-12-05 10:04:13.888098] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:26.508 [2024-12-05 10:04:13.888106] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:26.508 [2024-12-05 10:04:13.888111] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:26.508 [2024-12-05 10:04:13.888117] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:26.508 [2024-12-05 10:04:13.888122] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:26.508 [2024-12-05 10:04:13.888127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.508 [2024-12-05 10:04:13.888133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:26.508 [2024-12-05 10:04:13.888138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:32:26.508 [2024-12-05 10:04:13.888144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.508 [2024-12-05 10:04:13.888217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.508 [2024-12-05 10:04:13.888223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:26.508 [2024-12-05 10:04:13.888229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:26.508 [2024-12-05 10:04:13.888236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.508 [2024-12-05 10:04:13.888312] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:26.508 [2024-12-05 10:04:13.888319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:26.508 [2024-12-05 10:04:13.888325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:26.508 [2024-12-05 10:04:13.888331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:26.508 [2024-12-05 10:04:13.888337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:26.508 [2024-12-05 10:04:13.888342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:26.508 [2024-12-05 10:04:13.888347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:26.508 [2024-12-05 10:04:13.888352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:26.508 [2024-12-05 10:04:13.888358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:26.508 [2024-12-05 10:04:13.888363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:26.508 [2024-12-05 10:04:13.888368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:26.508 [2024-12-05 10:04:13.888373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:26.508 [2024-12-05 10:04:13.888378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:26.508 [2024-12-05 10:04:13.888382] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region nvc_md 00:32:26.508 [2024-12-05 10:04:13.888388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:26.508 [2024-12-05 10:04:13.888396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:26.508 [2024-12-05 10:04:13.888402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:26.508 [2024-12-05 10:04:13.888407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:26.508 [2024-12-05 10:04:13.888413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:26.508 [2024-12-05 10:04:13.888418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:26.508 [2024-12-05 10:04:13.888423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:26.508 [2024-12-05 10:04:13.888428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:26.508 [2024-12-05 10:04:13.888433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:26.508 [2024-12-05 10:04:13.888438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:26.508 [2024-12-05 10:04:13.888443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:26.508 [2024-12-05 10:04:13.888448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:26.508 [2024-12-05 10:04:13.888453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:26.508 [2024-12-05 10:04:13.888457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:26.508 [2024-12-05 10:04:13.888462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:26.508 [2024-12-05 10:04:13.888467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:26.508 [2024-12-05 10:04:13.888472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:26.508 [2024-12-05 10:04:13.888477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:26.508 [2024-12-05 10:04:13.888482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:26.508 [2024-12-05 10:04:13.888487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:26.508 [2024-12-05 10:04:13.888492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:26.508 [2024-12-05 10:04:13.888497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:26.508 [2024-12-05 10:04:13.888501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:26.508 [2024-12-05 10:04:13.888506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:26.508 [2024-12-05 10:04:13.888520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:26.508 [2024-12-05 10:04:13.888525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:26.508 [2024-12-05 10:04:13.888531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:26.508 [2024-12-05 10:04:13.888536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:26.508 [2024-12-05 10:04:13.888541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:26.508 [2024-12-05 10:04:13.888546] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:26.508 [2024-12-05 10:04:13.888552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:26.508 [2024-12-05 
10:04:13.888557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:26.508 [2024-12-05 10:04:13.888563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:26.508 [2024-12-05 10:04:13.888571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:26.508 [2024-12-05 10:04:13.888576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:26.508 [2024-12-05 10:04:13.888581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:26.508 [2024-12-05 10:04:13.888589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:26.508 [2024-12-05 10:04:13.888594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:26.508 [2024-12-05 10:04:13.888600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:26.508 [2024-12-05 10:04:13.888606] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:26.508 [2024-12-05 10:04:13.888613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:26.508 [2024-12-05 10:04:13.888619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:26.508 [2024-12-05 10:04:13.888624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:26.509 [2024-12-05 10:04:13.888630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:26.509 [2024-12-05 10:04:13.888635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:26.509 [2024-12-05 10:04:13.888640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:26.509 [2024-12-05 10:04:13.888645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:26.509 [2024-12-05 10:04:13.888651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:26.509 [2024-12-05 10:04:13.888656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:26.509 [2024-12-05 10:04:13.888661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:26.509 [2024-12-05 10:04:13.888667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:26.509 [2024-12-05 10:04:13.888672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:26.509 [2024-12-05 10:04:13.888677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:26.509 [2024-12-05 10:04:13.888682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:26.509 [2024-12-05 10:04:13.888687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:26.509 [2024-12-05 10:04:13.888693] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:26.509 [2024-12-05 10:04:13.888699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:26.509 [2024-12-05 10:04:13.888705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:26.509 [2024-12-05 10:04:13.888711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:26.509 [2024-12-05 10:04:13.888716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:26.509 [2024-12-05 10:04:13.888722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:26.509 [2024-12-05 10:04:13.888727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.888732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:26.509 [2024-12-05 10:04:13.888737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:32:26.509 [2024-12-05 10:04:13.888743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.906955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.906979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:26.509 [2024-12-05 10:04:13.906987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.182 ms 00:32:26.509 [2024-12-05 10:04:13.906993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.907055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.907061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:26.509 [2024-12-05 10:04:13.907069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:32:26.509 [2024-12-05 10:04:13.907075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.947816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.947846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:26.509 [2024-12-05 10:04:13.947855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.705 ms 00:32:26.509 [2024-12-05 10:04:13.947862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.947894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.947902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:26.509 [2024-12-05 10:04:13.947909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:26.509 [2024-12-05 10:04:13.947914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.947983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.947992] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:26.509 [2024-12-05 10:04:13.947998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:26.509 [2024-12-05 10:04:13.948004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.948091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.948099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:26.509 [2024-12-05 10:04:13.948106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:32:26.509 [2024-12-05 10:04:13.948111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.958397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.958500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:26.509 [2024-12-05 10:04:13.958527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.272 ms 00:32:26.509 [2024-12-05 10:04:13.958533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.958619] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:26.509 [2024-12-05 10:04:13.958629] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:26.509 [2024-12-05 10:04:13.958636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.958644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:26.509 [2024-12-05 10:04:13.958651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:26.509 [2024-12-05 10:04:13.958657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.967764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.967785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:26.509 [2024-12-05 10:04:13.967793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.096 ms 00:32:26.509 [2024-12-05 10:04:13.967799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.967883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.967890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:26.509 [2024-12-05 10:04:13.967896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:32:26.509 [2024-12-05 10:04:13.967904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.967927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.967934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:26.509 [2024-12-05 10:04:13.967944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:26.509 [2024-12-05 10:04:13.967950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.968387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.968395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 
00:32:26.509 [2024-12-05 10:04:13.968401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:32:26.509 [2024-12-05 10:04:13.968406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.968419] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:26.509 [2024-12-05 10:04:13.968426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.968432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:26.509 [2024-12-05 10:04:13.968438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:26.509 [2024-12-05 10:04:13.968443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.976937] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:26.509 [2024-12-05 10:04:13.977038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.977046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:26.509 [2024-12-05 10:04:13.977053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.583 ms 00:32:26.509 [2024-12-05 10:04:13.977059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.978636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.978655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:26.509 [2024-12-05 10:04:13.978661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:32:26.509 [2024-12-05 10:04:13.978667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.978732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.978740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:26.509 [2024-12-05 10:04:13.978746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:32:26.509 [2024-12-05 10:04:13.978752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.978767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.978776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:26.509 [2024-12-05 10:04:13.978782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:26.509 [2024-12-05 10:04:13.978788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.978808] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:26.509 [2024-12-05 10:04:13.978815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.978821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:26.509 [2024-12-05 10:04:13.978827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:26.509 [2024-12-05 10:04:13.978832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.996910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.996935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL 
dirty state 00:32:26.509 [2024-12-05 10:04:13.996943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.066 ms 00:32:26.509 [2024-12-05 10:04:13.996949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.509 [2024-12-05 10:04:13.996999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.509 [2024-12-05 10:04:13.997006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:26.510 [2024-12-05 10:04:13.997012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:32:26.510 [2024-12-05 10:04:13.997018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.510 [2024-12-05 10:04:13.997713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 113.761 ms, result 0 00:32:27.897  [2024-12-05T10:04:16.468Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-05T10:04:17.410Z] Copying: 23/1024 [MB] (10 MBps) [2024-12-05T10:04:18.355Z] Copying: 35/1024 [MB] (11 MBps) [2024-12-05T10:04:19.299Z] Copying: 54/1024 [MB] (19 MBps) [2024-12-05T10:04:20.244Z] Copying: 78/1024 [MB] (23 MBps) [2024-12-05T10:04:21.190Z] Copying: 90/1024 [MB] (12 MBps) [2024-12-05T10:04:22.571Z] Copying: 108/1024 [MB] (17 MBps) [2024-12-05T10:04:23.143Z] Copying: 127/1024 [MB] (19 MBps) [2024-12-05T10:04:24.531Z] Copying: 141/1024 [MB] (14 MBps) [2024-12-05T10:04:25.475Z] Copying: 155/1024 [MB] (13 MBps) [2024-12-05T10:04:26.417Z] Copying: 171/1024 [MB] (16 MBps) [2024-12-05T10:04:27.381Z] Copying: 182/1024 [MB] (10 MBps) [2024-12-05T10:04:28.320Z] Copying: 192/1024 [MB] (10 MBps) [2024-12-05T10:04:29.260Z] Copying: 202/1024 [MB] (10 MBps) [2024-12-05T10:04:30.203Z] Copying: 223/1024 [MB] (20 MBps) [2024-12-05T10:04:31.144Z] Copying: 238/1024 [MB] (15 MBps) [2024-12-05T10:04:32.529Z] Copying: 261/1024 [MB] (22 MBps) [2024-12-05T10:04:33.474Z] Copying: 282/1024 [MB] (20 MBps) [2024-12-05T10:04:34.421Z] Copying: 302/1024 [MB] (19 MBps) [2024-12-05T10:04:35.392Z] Copying: 323/1024 [MB] (21 MBps) [2024-12-05T10:04:36.351Z] Copying: 342/1024 [MB] (19 MBps) [2024-12-05T10:04:37.296Z] Copying: 362/1024 [MB] (19 MBps) [2024-12-05T10:04:38.241Z] Copying: 383/1024 [MB] (20 MBps) [2024-12-05T10:04:39.186Z] Copying: 407/1024 [MB] (24 MBps) [2024-12-05T10:04:40.575Z] Copying: 428/1024 [MB] (21 MBps) [2024-12-05T10:04:41.147Z] Copying: 441/1024 [MB] (12 MBps) [2024-12-05T10:04:42.535Z] Copying: 469/1024 [MB] (28 MBps) [2024-12-05T10:04:43.480Z] Copying: 491/1024 [MB] (21 MBps) [2024-12-05T10:04:44.425Z] Copying: 521/1024 [MB] (30 MBps) [2024-12-05T10:04:45.370Z] Copying: 547/1024 [MB] (25 MBps) [2024-12-05T10:04:46.308Z] Copying: 573/1024 [MB] (26 MBps) [2024-12-05T10:04:47.253Z] Copying: 594/1024 [MB] (20 MBps) [2024-12-05T10:04:48.198Z] Copying: 615/1024 [MB] (20 MBps) [2024-12-05T10:04:49.139Z] Copying: 635/1024 [MB] (20 MBps) [2024-12-05T10:04:50.521Z] Copying: 655/1024 [MB] (19 MBps) [2024-12-05T10:04:51.453Z] Copying: 666/1024 [MB] (11 MBps) [2024-12-05T10:04:52.386Z] Copying: 678/1024 [MB] (11 MBps) [2024-12-05T10:04:53.321Z] Copying: 690/1024 [MB] (12 MBps) [2024-12-05T10:04:54.255Z] Copying: 702/1024 [MB] (12 MBps) [2024-12-05T10:04:55.194Z] Copying: 714/1024 [MB] (12 MBps) [2024-12-05T10:04:56.574Z] Copying: 726/1024 [MB] (12 MBps) [2024-12-05T10:04:57.146Z] Copying: 738/1024 [MB] (11 MBps) [2024-12-05T10:04:58.521Z] Copying: 749/1024 [MB] (10 MBps) [2024-12-05T10:04:59.458Z] Copying: 760/1024 [MB] (11 MBps) [2024-12-05T10:05:00.398Z] Copying: 
772/1024 [MB] (11 MBps) [2024-12-05T10:05:01.336Z] Copying: 782/1024 [MB] (10 MBps) [2024-12-05T10:05:02.275Z] Copying: 793/1024 [MB] (10 MBps) [2024-12-05T10:05:03.217Z] Copying: 803/1024 [MB] (10 MBps) [2024-12-05T10:05:04.161Z] Copying: 815/1024 [MB] (11 MBps) [2024-12-05T10:05:05.547Z] Copying: 826/1024 [MB] (11 MBps) [2024-12-05T10:05:06.486Z] Copying: 837/1024 [MB] (10 MBps) [2024-12-05T10:05:07.443Z] Copying: 852/1024 [MB] (15 MBps) [2024-12-05T10:05:08.386Z] Copying: 872/1024 [MB] (19 MBps) [2024-12-05T10:05:09.330Z] Copying: 882/1024 [MB] (10 MBps) [2024-12-05T10:05:10.394Z] Copying: 898/1024 [MB] (16 MBps) [2024-12-05T10:05:11.337Z] Copying: 909/1024 [MB] (10 MBps) [2024-12-05T10:05:12.280Z] Copying: 920/1024 [MB] (10 MBps) [2024-12-05T10:05:13.225Z] Copying: 933/1024 [MB] (13 MBps) [2024-12-05T10:05:14.173Z] Copying: 955/1024 [MB] (21 MBps) [2024-12-05T10:05:15.560Z] Copying: 976/1024 [MB] (21 MBps) [2024-12-05T10:05:16.501Z] Copying: 997/1024 [MB] (20 MBps) [2024-12-05T10:05:16.501Z] Copying: 1018/1024 [MB] (21 MBps) [2024-12-05T10:05:17.076Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-05 10:05:16.814625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.447 [2024-12-05 10:05:16.814769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:29.447 [2024-12-05 10:05:16.814810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:29.447 [2024-12-05 10:05:16.814834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.448 [2024-12-05 10:05:16.814933] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:29.448 [2024-12-05 10:05:16.819279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.448 [2024-12-05 10:05:16.819329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:29.448 [2024-12-05 10:05:16.819343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.302 ms 00:33:29.448 [2024-12-05 10:05:16.819354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.448 [2024-12-05 10:05:16.819672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.448 [2024-12-05 10:05:16.819687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:29.448 [2024-12-05 10:05:16.819700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:33:29.448 [2024-12-05 10:05:16.819711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.448 [2024-12-05 10:05:16.819751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.448 [2024-12-05 10:05:16.819763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:29.448 [2024-12-05 10:05:16.819775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:29.448 [2024-12-05 10:05:16.819785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.448 [2024-12-05 10:05:16.819854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.448 [2024-12-05 10:05:16.819866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:29.448 [2024-12-05 10:05:16.819877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:29.448 [2024-12-05 10:05:16.819887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.448 [2024-12-05 
10:05:16.819906] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:29.448 [2024-12-05 10:05:16.819922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.819938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.819947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.819958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.819968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.819979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.819989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 
10:05:16.820171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:33:29.448 [2024-12-05 10:05:16.820451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:29.448 [2024-12-05 10:05:16.820726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.820989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:29.449 [2024-12-05 10:05:16.821009] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:29.449 [2024-12-05 10:05:16.821020] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0d7f734e-16bb-40f2-894f-12462e7ca1e0 00:33:29.449 [2024-12-05 10:05:16.821030] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:29.449 [2024-12-05 10:05:16.821040] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:29.449 [2024-12-05 10:05:16.821049] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:29.449 [2024-12-05 10:05:16.821059] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:29.449 [2024-12-05 10:05:16.821067] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:29.449 [2024-12-05 10:05:16.821078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:29.449 [2024-12-05 10:05:16.821095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:29.449 [2024-12-05 10:05:16.821104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:29.449 [2024-12-05 10:05:16.821112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:29.449 [2024-12-05 10:05:16.821121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.449 [2024-12-05 10:05:16.821132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:29.449 [2024-12-05 10:05:16.821143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:33:29.449 [2024-12-05 10:05:16.821172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.449 [2024-12-05 10:05:16.836150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.449 [2024-12-05 10:05:16.836202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:29.449 [2024-12-05 10:05:16.836215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.957 ms 00:33:29.449 [2024-12-05 10:05:16.836223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.449 [2024-12-05 10:05:16.836640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.449 [2024-12-05 10:05:16.836661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:29.449 [2024-12-05 10:05:16.836678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:33:29.449 [2024-12-05 10:05:16.836686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.449 [2024-12-05 10:05:16.873409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:29.449 [2024-12-05 10:05:16.873457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:29.449 [2024-12-05 10:05:16.873469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:29.449 [2024-12-05 10:05:16.873479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.449 [2024-12-05 10:05:16.873571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:29.449 [2024-12-05 10:05:16.873583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:29.449 [2024-12-05 10:05:16.873598] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:29.449 [2024-12-05 10:05:16.873608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.449 [2024-12-05 10:05:16.873672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:29.449 [2024-12-05 10:05:16.873683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:29.449 [2024-12-05 10:05:16.873692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:29.449 [2024-12-05 10:05:16.873702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.449 [2024-12-05 10:05:16.873719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:29.449 [2024-12-05 10:05:16.873729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:29.449 [2024-12-05 10:05:16.873739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:29.449 [2024-12-05 10:05:16.873751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.449 [2024-12-05 10:05:16.958430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:29.449 [2024-12-05 10:05:16.958495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:29.449 [2024-12-05 10:05:16.958531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:29.449 [2024-12-05 10:05:16.958541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.449 [2024-12-05 10:05:17.027736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:29.449 [2024-12-05 10:05:17.027791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:29.449 [2024-12-05 10:05:17.027803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:29.449 [2024-12-05 10:05:17.027819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.449 [2024-12-05 10:05:17.027909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:29.449 [2024-12-05 10:05:17.027920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:29.449 [2024-12-05 10:05:17.027929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:29.449 [2024-12-05 10:05:17.027938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.449 [2024-12-05 10:05:17.027980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:29.449 [2024-12-05 10:05:17.027990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:29.449 [2024-12-05 10:05:17.027998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:29.449 [2024-12-05 10:05:17.028007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.449 [2024-12-05 10:05:17.028089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:29.449 [2024-12-05 10:05:17.028099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:29.449 [2024-12-05 10:05:17.028108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:29.449 [2024-12-05 10:05:17.028116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.449 [2024-12-05 10:05:17.028144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:29.449 [2024-12-05 10:05:17.028153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:33:29.449 [2024-12-05 10:05:17.028161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:29.449 [2024-12-05 10:05:17.028169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.449 [2024-12-05 10:05:17.028229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:29.449 [2024-12-05 10:05:17.028239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:29.449 [2024-12-05 10:05:17.028248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:29.449 [2024-12-05 10:05:17.028256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.449 [2024-12-05 10:05:17.028303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:29.449 [2024-12-05 10:05:17.028313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:29.449 [2024-12-05 10:05:17.028321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:29.449 [2024-12-05 10:05:17.028330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:29.449 [2024-12-05 10:05:17.028470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 213.909 ms, result 0 00:33:30.389 00:33:30.389 00:33:30.389 10:05:17 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:32.934 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:32.934 10:05:20 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:32.934 [2024-12-05 10:05:20.201505] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:33:32.934 [2024-12-05 10:05:20.201665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85450 ] 00:33:32.934 [2024-12-05 10:05:20.362104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.934 [2024-12-05 10:05:20.486915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.195 [2024-12-05 10:05:20.786232] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:33.195 [2024-12-05 10:05:20.786331] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:33.457 [2024-12-05 10:05:20.948249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.457 [2024-12-05 10:05:20.948322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:33.457 [2024-12-05 10:05:20.948338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:33.457 [2024-12-05 10:05:20.948347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.457 [2024-12-05 10:05:20.948405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.457 [2024-12-05 10:05:20.948418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:33.457 [2024-12-05 10:05:20.948427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:33:33.457 [2024-12-05 10:05:20.948435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.458 [2024-12-05 10:05:20.948457] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:33.458 [2024-12-05 10:05:20.949186] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:33.458 [2024-12-05 10:05:20.949234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.458 [2024-12-05 10:05:20.949243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:33.458 [2024-12-05 10:05:20.949252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.782 ms 00:33:33.458 [2024-12-05 10:05:20.949260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.458 [2024-12-05 10:05:20.949912] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:33.458 [2024-12-05 10:05:20.949982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.458 [2024-12-05 10:05:20.950000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:33.458 [2024-12-05 10:05:20.950012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:33:33.458 [2024-12-05 10:05:20.950020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.458 [2024-12-05 10:05:20.950124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.458 [2024-12-05 10:05:20.950136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:33.458 [2024-12-05 10:05:20.950146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:33:33.458 [2024-12-05 10:05:20.950154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.458 [2024-12-05 10:05:20.950459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:33.458 [2024-12-05 10:05:20.950472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:33.458 [2024-12-05 10:05:20.950481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:33:33.458 [2024-12-05 10:05:20.950489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.458 [2024-12-05 10:05:20.950592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.458 [2024-12-05 10:05:20.950604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:33.458 [2024-12-05 10:05:20.950613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:33:33.458 [2024-12-05 10:05:20.950621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.458 [2024-12-05 10:05:20.950646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.458 [2024-12-05 10:05:20.950655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:33.458 [2024-12-05 10:05:20.950667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:33.458 [2024-12-05 10:05:20.950674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.458 [2024-12-05 10:05:20.950697] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:33.458 [2024-12-05 10:05:20.955110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.458 [2024-12-05 10:05:20.955152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:33.458 [2024-12-05 10:05:20.955163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.419 ms 00:33:33.458 [2024-12-05 10:05:20.955171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.458 [2024-12-05 10:05:20.955212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.458 [2024-12-05 10:05:20.955221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:33.458 [2024-12-05 10:05:20.955230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:33.458 [2024-12-05 10:05:20.955237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.458 [2024-12-05 10:05:20.955302] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:33.458 [2024-12-05 10:05:20.955327] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:33.458 [2024-12-05 10:05:20.955367] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:33.458 [2024-12-05 10:05:20.955382] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:33.458 [2024-12-05 10:05:20.955490] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:33.458 [2024-12-05 10:05:20.955501] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:33.458 [2024-12-05 10:05:20.955526] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:33.458 [2024-12-05 10:05:20.955538] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:33.458 [2024-12-05 10:05:20.955547] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:33.458 [2024-12-05 10:05:20.955558] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:33.458 [2024-12-05 10:05:20.955566] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:33.458 [2024-12-05 10:05:20.955574] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:33.458 [2024-12-05 10:05:20.955582] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:33.458 [2024-12-05 10:05:20.955590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.458 [2024-12-05 10:05:20.955598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:33.458 [2024-12-05 10:05:20.955606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:33:33.458 [2024-12-05 10:05:20.955614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.458 [2024-12-05 10:05:20.955701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.458 [2024-12-05 10:05:20.955711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:33.458 [2024-12-05 10:05:20.955718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:33.458 [2024-12-05 10:05:20.955729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.458 [2024-12-05 10:05:20.955832] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:33.458 [2024-12-05 10:05:20.955844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:33.458 [2024-12-05 10:05:20.955852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:33.458 [2024-12-05 10:05:20.955860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:33.458 [2024-12-05 10:05:20.955868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:33.458 [2024-12-05 10:05:20.955875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:33.458 [2024-12-05 10:05:20.955882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:33.458 [2024-12-05 10:05:20.955891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:33.458 [2024-12-05 10:05:20.955899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:33.458 [2024-12-05 10:05:20.955905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:33.458 [2024-12-05 10:05:20.955913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:33.458 [2024-12-05 10:05:20.955923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:33.458 [2024-12-05 10:05:20.955930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:33.458 [2024-12-05 10:05:20.955937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:33.458 [2024-12-05 10:05:20.955944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:33.458 [2024-12-05 10:05:20.955957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:33.458 [2024-12-05 10:05:20.955964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:33.458 [2024-12-05 10:05:20.955972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:33.458 [2024-12-05 10:05:20.955979] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:33.458 [2024-12-05 10:05:20.955985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:33.458 [2024-12-05 10:05:20.955992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:33.458 [2024-12-05 10:05:20.955999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:33.458 [2024-12-05 10:05:20.956006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:33.458 [2024-12-05 10:05:20.956013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:33.458 [2024-12-05 10:05:20.956019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:33.458 [2024-12-05 10:05:20.956026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:33.458 [2024-12-05 10:05:20.956034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:33.458 [2024-12-05 10:05:20.956040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:33.458 [2024-12-05 10:05:20.956047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:33.458 [2024-12-05 10:05:20.956055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:33.458 [2024-12-05 10:05:20.956061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:33.458 [2024-12-05 10:05:20.956068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:33.458 [2024-12-05 10:05:20.956075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:33.458 [2024-12-05 10:05:20.956082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:33.458 [2024-12-05 10:05:20.956088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:33.458 [2024-12-05 10:05:20.956095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:33.458 [2024-12-05 10:05:20.956101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:33.458 [2024-12-05 10:05:20.956109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:33.458 [2024-12-05 10:05:20.956116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:33.458 [2024-12-05 10:05:20.956122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:33.458 [2024-12-05 10:05:20.956129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:33.458 [2024-12-05 10:05:20.956136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:33.458 [2024-12-05 10:05:20.956144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:33.458 [2024-12-05 10:05:20.956152] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:33.458 [2024-12-05 10:05:20.956161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:33.458 [2024-12-05 10:05:20.956168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:33.458 [2024-12-05 10:05:20.956194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:33.458 [2024-12-05 10:05:20.956205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:33.459 [2024-12-05 10:05:20.956213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:33.459 [2024-12-05 10:05:20.956219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:33.459 
[2024-12-05 10:05:20.956226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:33.459 [2024-12-05 10:05:20.956233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:33.459 [2024-12-05 10:05:20.956240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:33.459 [2024-12-05 10:05:20.956249] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:33.459 [2024-12-05 10:05:20.956258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:33.459 [2024-12-05 10:05:20.956267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:33.459 [2024-12-05 10:05:20.956275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:33.459 [2024-12-05 10:05:20.956283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:33.459 [2024-12-05 10:05:20.956290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:33.459 [2024-12-05 10:05:20.956298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:33.459 [2024-12-05 10:05:20.956305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:33.459 [2024-12-05 10:05:20.956312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:33.459 [2024-12-05 10:05:20.956319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:33.459 [2024-12-05 10:05:20.956327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:33.459 [2024-12-05 10:05:20.956334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:33.459 [2024-12-05 10:05:20.956342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:33.459 [2024-12-05 10:05:20.956349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:33.459 [2024-12-05 10:05:20.956356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:33.459 [2024-12-05 10:05:20.956364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:33.459 [2024-12-05 10:05:20.956371] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:33.459 [2024-12-05 10:05:20.956379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:33.459 [2024-12-05 10:05:20.956389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:33.459 [2024-12-05 10:05:20.956396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:33.459 [2024-12-05 10:05:20.956403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:33.459 [2024-12-05 10:05:20.956410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:33.459 [2024-12-05 10:05:20.956418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:20.956427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:33.459 [2024-12-05 10:05:20.956435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:33:33.459 [2024-12-05 10:05:20.956442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:20.984772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:20.984968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:33.459 [2024-12-05 10:05:20.985165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.286 ms 00:33:33.459 [2024-12-05 10:05:20.985210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:20.985318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:20.985496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:33.459 [2024-12-05 10:05:20.985603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:33:33.459 [2024-12-05 10:05:20.985629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:21.034392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:21.034640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:33.459 [2024-12-05 10:05:21.034857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.683 ms 00:33:33.459 [2024-12-05 10:05:21.034911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:21.034977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:21.035002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:33.459 [2024-12-05 10:05:21.035023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:33.459 [2024-12-05 10:05:21.035043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:21.035175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:21.035219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:33.459 [2024-12-05 10:05:21.035242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:33:33.459 [2024-12-05 10:05:21.035328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:21.035490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:21.035589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:33.459 [2024-12-05 10:05:21.035614] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:33:33.459 [2024-12-05 10:05:21.035635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:21.051702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:21.051869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:33.459 [2024-12-05 10:05:21.051928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.032 ms 00:33:33.459 [2024-12-05 10:05:21.051951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:21.052128] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:33.459 [2024-12-05 10:05:21.052240] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:33.459 [2024-12-05 10:05:21.052280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:21.052300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:33.459 [2024-12-05 10:05:21.052320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:33:33.459 [2024-12-05 10:05:21.052409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:21.064727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:21.064880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:33.459 [2024-12-05 10:05:21.064940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.277 ms 00:33:33.459 [2024-12-05 10:05:21.064963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:21.065111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:21.065134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:33.459 [2024-12-05 10:05:21.065161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:33:33.459 [2024-12-05 10:05:21.065223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:21.065295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:21.065320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:33.459 [2024-12-05 10:05:21.065350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:33.459 [2024-12-05 10:05:21.065370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:21.065987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:21.066041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:33.459 [2024-12-05 10:05:21.066065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:33:33.459 [2024-12-05 10:05:21.066089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:21.066120] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:33.459 [2024-12-05 10:05:21.066218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:21.066273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:33.459 [2024-12-05 10:05:21.066297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:33:33.459 [2024-12-05 10:05:21.066336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:21.079059] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:33.459 [2024-12-05 10:05:21.079357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:21.079374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:33.459 [2024-12-05 10:05:21.079386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.980 ms 00:33:33.459 [2024-12-05 10:05:21.079394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:21.081650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:21.081690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:33.459 [2024-12-05 10:05:21.081700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.221 ms 00:33:33.459 [2024-12-05 10:05:21.081708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:21.081806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:21.081818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:33.459 [2024-12-05 10:05:21.081828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:33:33.459 [2024-12-05 10:05:21.081837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.459 [2024-12-05 10:05:21.081870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.459 [2024-12-05 10:05:21.081878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:33.459 [2024-12-05 10:05:21.081887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:33.460 [2024-12-05 10:05:21.081894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.460 [2024-12-05 10:05:21.081926] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:33.460 [2024-12-05 10:05:21.081936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.460 [2024-12-05 10:05:21.081944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:33.460 [2024-12-05 10:05:21.081952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:33.460 [2024-12-05 10:05:21.081959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.719 [2024-12-05 10:05:21.108859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.719 [2024-12-05 10:05:21.109055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:33.719 [2024-12-05 10:05:21.109079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.877 ms 00:33:33.719 [2024-12-05 10:05:21.109089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.720 [2024-12-05 10:05:21.109168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.720 [2024-12-05 10:05:21.109178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:33.720 [2024-12-05 10:05:21.109187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.036 ms 00:33:33.720 [2024-12-05 10:05:21.109194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.720 [2024-12-05 10:05:21.110569] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 161.820 ms, result 0 00:33:34.660  [2024-12-05T10:05:23.231Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-05T10:05:24.172Z] Copying: 30/1024 [MB] (17 MBps) [2024-12-05T10:05:25.552Z] Copying: 47/1024 [MB] (16 MBps) [2024-12-05T10:05:26.493Z] Copying: 61/1024 [MB] (14 MBps) [2024-12-05T10:05:27.434Z] Copying: 78/1024 [MB] (17 MBps) [2024-12-05T10:05:28.375Z] Copying: 100/1024 [MB] (22 MBps) [2024-12-05T10:05:29.316Z] Copying: 122/1024 [MB] (22 MBps) [2024-12-05T10:05:30.257Z] Copying: 142/1024 [MB] (19 MBps) [2024-12-05T10:05:31.201Z] Copying: 160/1024 [MB] (18 MBps) [2024-12-05T10:05:32.141Z] Copying: 177/1024 [MB] (16 MBps) [2024-12-05T10:05:33.522Z] Copying: 193/1024 [MB] (15 MBps) [2024-12-05T10:05:34.462Z] Copying: 212/1024 [MB] (19 MBps) [2024-12-05T10:05:35.403Z] Copying: 225/1024 [MB] (12 MBps) [2024-12-05T10:05:36.340Z] Copying: 243/1024 [MB] (17 MBps) [2024-12-05T10:05:37.275Z] Copying: 264/1024 [MB] (21 MBps) [2024-12-05T10:05:38.213Z] Copying: 282/1024 [MB] (17 MBps) [2024-12-05T10:05:39.151Z] Copying: 293/1024 [MB] (11 MBps) [2024-12-05T10:05:40.532Z] Copying: 303/1024 [MB] (10 MBps) [2024-12-05T10:05:41.158Z] Copying: 323/1024 [MB] (19 MBps) [2024-12-05T10:05:42.545Z] Copying: 337/1024 [MB] (14 MBps) [2024-12-05T10:05:43.491Z] Copying: 354/1024 [MB] (16 MBps) [2024-12-05T10:05:44.434Z] Copying: 371/1024 [MB] (16 MBps) [2024-12-05T10:05:45.375Z] Copying: 389/1024 [MB] (18 MBps) [2024-12-05T10:05:46.317Z] Copying: 400/1024 [MB] (10 MBps) [2024-12-05T10:05:47.289Z] Copying: 419/1024 [MB] (19 MBps) [2024-12-05T10:05:48.235Z] Copying: 432/1024 [MB] (13 MBps) [2024-12-05T10:05:49.184Z] Copying: 447/1024 [MB] (14 MBps) [2024-12-05T10:05:50.129Z] Copying: 462/1024 [MB] (15 MBps) [2024-12-05T10:05:51.519Z] Copying: 472/1024 [MB] (10 MBps) [2024-12-05T10:05:52.465Z] Copying: 492/1024 [MB] (19 MBps) [2024-12-05T10:05:53.410Z] Copying: 509/1024 [MB] (16 MBps) [2024-12-05T10:05:54.355Z] Copying: 524/1024 [MB] (14 MBps) [2024-12-05T10:05:55.298Z] Copying: 536/1024 [MB] (12 MBps) [2024-12-05T10:05:56.241Z] Copying: 573/1024 [MB] (37 MBps) [2024-12-05T10:05:57.187Z] Copying: 587/1024 [MB] (13 MBps) [2024-12-05T10:05:58.131Z] Copying: 597/1024 [MB] (10 MBps) [2024-12-05T10:05:59.520Z] Copying: 607/1024 [MB] (10 MBps) [2024-12-05T10:06:00.465Z] Copying: 618/1024 [MB] (10 MBps) [2024-12-05T10:06:01.406Z] Copying: 632/1024 [MB] (14 MBps) [2024-12-05T10:06:02.343Z] Copying: 646/1024 [MB] (14 MBps) [2024-12-05T10:06:03.305Z] Copying: 661/1024 [MB] (14 MBps) [2024-12-05T10:06:04.248Z] Copying: 671/1024 [MB] (10 MBps) [2024-12-05T10:06:05.224Z] Copying: 682/1024 [MB] (10 MBps) [2024-12-05T10:06:06.166Z] Copying: 695/1024 [MB] (12 MBps) [2024-12-05T10:06:07.553Z] Copying: 708/1024 [MB] (13 MBps) [2024-12-05T10:06:08.126Z] Copying: 725/1024 [MB] (16 MBps) [2024-12-05T10:06:09.517Z] Copying: 742/1024 [MB] (17 MBps) [2024-12-05T10:06:10.464Z] Copying: 754/1024 [MB] (12 MBps) [2024-12-05T10:06:11.411Z] Copying: 782936/1048576 [kB] (9992 kBps) [2024-12-05T10:06:12.352Z] Copying: 793096/1048576 [kB] (10160 kBps) [2024-12-05T10:06:13.312Z] Copying: 784/1024 [MB] (10 MBps) [2024-12-05T10:06:14.287Z] Copying: 796/1024 [MB] (11 MBps) [2024-12-05T10:06:15.222Z] Copying: 807/1024 [MB] (11 MBps) [2024-12-05T10:06:16.156Z] 
Copying: 819/1024 [MB] (11 MBps) [2024-12-05T10:06:17.542Z] Copying: 830/1024 [MB] (11 MBps) [2024-12-05T10:06:18.479Z] Copying: 860872/1048576 [kB] (10204 kBps) [2024-12-05T10:06:19.424Z] Copying: 852/1024 [MB] (11 MBps) [2024-12-05T10:06:20.382Z] Copying: 863/1024 [MB] (11 MBps) [2024-12-05T10:06:21.324Z] Copying: 873/1024 [MB] (10 MBps) [2024-12-05T10:06:22.265Z] Copying: 896/1024 [MB] (22 MBps) [2024-12-05T10:06:23.207Z] Copying: 910/1024 [MB] (13 MBps) [2024-12-05T10:06:24.150Z] Copying: 922/1024 [MB] (12 MBps) [2024-12-05T10:06:25.535Z] Copying: 940/1024 [MB] (18 MBps) [2024-12-05T10:06:26.478Z] Copying: 989/1024 [MB] (49 MBps) [2024-12-05T10:06:27.423Z] Copying: 1006/1024 [MB] (16 MBps) [2024-12-05T10:06:28.368Z] Copying: 1023/1024 [MB] (16 MBps) [2024-12-05T10:06:28.368Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-05 10:06:28.112785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.739 [2024-12-05 10:06:28.112884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:40.739 [2024-12-05 10:06:28.112904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:40.739 [2024-12-05 10:06:28.112914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.739 [2024-12-05 10:06:28.115102] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:40.739 [2024-12-05 10:06:28.120697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.739 [2024-12-05 10:06:28.120750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:40.739 [2024-12-05 10:06:28.120763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.386 ms 00:34:40.739 [2024-12-05 10:06:28.120782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.739 [2024-12-05 10:06:28.131741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.739 [2024-12-05 10:06:28.131795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:40.739 [2024-12-05 10:06:28.131808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.038 ms 00:34:40.739 [2024-12-05 10:06:28.131817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.739 [2024-12-05 10:06:28.131849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.739 [2024-12-05 10:06:28.131859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:40.739 [2024-12-05 10:06:28.131868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:40.739 [2024-12-05 10:06:28.131877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.739 [2024-12-05 10:06:28.131946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.739 [2024-12-05 10:06:28.131956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:40.739 [2024-12-05 10:06:28.131965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:34:40.739 [2024-12-05 10:06:28.131975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.739 [2024-12-05 10:06:28.131990] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:40.739 [2024-12-05 10:06:28.132002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126976 / 261120 wr_cnt: 1 state: open 00:34:40.739 [2024-12-05 10:06:28.132013] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132229] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:40.739 [2024-12-05 10:06:28.132237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 
[2024-12-05 10:06:28.132441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:34:40.740 [2024-12-05 10:06:28.132673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:40.740 [2024-12-05 10:06:28.132881] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:34:40.740 [2024-12-05 10:06:28.132889] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0d7f734e-16bb-40f2-894f-12462e7ca1e0 00:34:40.740 [2024-12-05 10:06:28.132897] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126976 00:34:40.740 [2024-12-05 10:06:28.132904] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127008 00:34:40.740 [2024-12-05 10:06:28.132914] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126976 00:34:40.740 [2024-12-05 10:06:28.132926] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:34:40.740 [2024-12-05 10:06:28.132933] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:40.740 [2024-12-05 10:06:28.132941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:40.740 [2024-12-05 10:06:28.132949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:40.740 [2024-12-05 10:06:28.132956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:40.740 [2024-12-05 10:06:28.132962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:40.740 [2024-12-05 10:06:28.132971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.740 [2024-12-05 10:06:28.132980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:40.740 [2024-12-05 10:06:28.132988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:34:40.740 [2024-12-05 10:06:28.132996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.740 [2024-12-05 10:06:28.146997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.740 [2024-12-05 10:06:28.147053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:40.740 [2024-12-05 10:06:28.147066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.978 ms 00:34:40.740 [2024-12-05 10:06:28.147075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.740 [2024-12-05 10:06:28.147466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.740 [2024-12-05 10:06:28.147489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:40.741 [2024-12-05 10:06:28.147499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:34:40.741 [2024-12-05 10:06:28.147524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.741 [2024-12-05 10:06:28.184146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:40.741 [2024-12-05 10:06:28.184196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:40.741 [2024-12-05 10:06:28.184220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:40.741 [2024-12-05 10:06:28.184230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.741 [2024-12-05 10:06:28.184300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:40.741 [2024-12-05 10:06:28.184310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:40.741 [2024-12-05 10:06:28.184320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:40.741 [2024-12-05 10:06:28.184329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.741 [2024-12-05 10:06:28.184405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:34:40.741 [2024-12-05 10:06:28.184426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:40.741 [2024-12-05 10:06:28.184436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:40.741 [2024-12-05 10:06:28.184445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.741 [2024-12-05 10:06:28.184464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:40.741 [2024-12-05 10:06:28.184474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:40.741 [2024-12-05 10:06:28.184483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:40.741 [2024-12-05 10:06:28.184493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.741 [2024-12-05 10:06:28.269431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:40.741 [2024-12-05 10:06:28.269490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:40.741 [2024-12-05 10:06:28.269504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:40.741 [2024-12-05 10:06:28.269530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.741 [2024-12-05 10:06:28.339231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:40.741 [2024-12-05 10:06:28.339292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:40.741 [2024-12-05 10:06:28.339305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:40.741 [2024-12-05 10:06:28.339315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.741 [2024-12-05 10:06:28.339384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:40.741 [2024-12-05 10:06:28.339395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:40.741 [2024-12-05 10:06:28.339411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:40.741 [2024-12-05 10:06:28.339420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.741 [2024-12-05 10:06:28.339479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:40.741 [2024-12-05 10:06:28.339491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:40.741 [2024-12-05 10:06:28.339499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:40.741 [2024-12-05 10:06:28.339542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.741 [2024-12-05 10:06:28.339630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:40.741 [2024-12-05 10:06:28.339640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:40.741 [2024-12-05 10:06:28.339651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:40.741 [2024-12-05 10:06:28.339664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.741 [2024-12-05 10:06:28.339691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:40.741 [2024-12-05 10:06:28.339700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:40.741 [2024-12-05 10:06:28.339709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:40.741 [2024-12-05 10:06:28.339720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.741 [2024-12-05 
10:06:28.339759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:40.741 [2024-12-05 10:06:28.339771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:40.741 [2024-12-05 10:06:28.339780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:40.741 [2024-12-05 10:06:28.339791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.741 [2024-12-05 10:06:28.339838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:40.741 [2024-12-05 10:06:28.339851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:40.741 [2024-12-05 10:06:28.339859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:40.741 [2024-12-05 10:06:28.339869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.741 [2024-12-05 10:06:28.340009] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 228.502 ms, result 0 00:34:42.127 00:34:42.127 00:34:42.127 10:06:29 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:34:42.387 [2024-12-05 10:06:29.808078] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:34:42.387 [2024-12-05 10:06:29.808466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86136 ] 00:34:42.387 [2024-12-05 10:06:29.966716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.649 [2024-12-05 10:06:30.095989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:42.910 [2024-12-05 10:06:30.394057] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:42.910 [2024-12-05 10:06:30.394362] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:43.173 [2024-12-05 10:06:30.554480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.173 [2024-12-05 10:06:30.554567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:43.173 [2024-12-05 10:06:30.554584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:43.173 [2024-12-05 10:06:30.554593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.173 [2024-12-05 10:06:30.554651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.173 [2024-12-05 10:06:30.554664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:43.173 [2024-12-05 10:06:30.554674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:34:43.173 [2024-12-05 10:06:30.554682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.173 [2024-12-05 10:06:30.554703] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:43.173 [2024-12-05 10:06:30.555399] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:43.173 [2024-12-05 10:06:30.555420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.173 
[2024-12-05 10:06:30.555428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:43.173 [2024-12-05 10:06:30.555439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:34:43.173 [2024-12-05 10:06:30.555449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.173 [2024-12-05 10:06:30.555784] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:43.173 [2024-12-05 10:06:30.555815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.174 [2024-12-05 10:06:30.555827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:43.174 [2024-12-05 10:06:30.555839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:34:43.174 [2024-12-05 10:06:30.555849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.174 [2024-12-05 10:06:30.555906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.174 [2024-12-05 10:06:30.555917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:43.174 [2024-12-05 10:06:30.555926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:34:43.174 [2024-12-05 10:06:30.555934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.174 [2024-12-05 10:06:30.556262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.174 [2024-12-05 10:06:30.556277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:43.174 [2024-12-05 10:06:30.556287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:34:43.174 [2024-12-05 10:06:30.556296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.174 [2024-12-05 10:06:30.556368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.174 [2024-12-05 10:06:30.556380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:43.174 [2024-12-05 10:06:30.556390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:34:43.174 [2024-12-05 10:06:30.556400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.174 [2024-12-05 10:06:30.556424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.174 [2024-12-05 10:06:30.556434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:43.174 [2024-12-05 10:06:30.556445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:43.174 [2024-12-05 10:06:30.556453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.174 [2024-12-05 10:06:30.556475] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:43.174 [2024-12-05 10:06:30.560832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.174 [2024-12-05 10:06:30.560877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:43.174 [2024-12-05 10:06:30.560889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.361 ms 00:34:43.174 [2024-12-05 10:06:30.560898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.174 [2024-12-05 10:06:30.560940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.174 [2024-12-05 10:06:30.560949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:43.174 
[2024-12-05 10:06:30.560958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:43.174 [2024-12-05 10:06:30.560966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.174 [2024-12-05 10:06:30.561029] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:43.174 [2024-12-05 10:06:30.561054] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:43.174 [2024-12-05 10:06:30.561100] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:43.174 [2024-12-05 10:06:30.561117] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:43.174 [2024-12-05 10:06:30.561226] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:43.174 [2024-12-05 10:06:30.561238] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:43.174 [2024-12-05 10:06:30.561249] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:43.174 [2024-12-05 10:06:30.561260] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:43.174 [2024-12-05 10:06:30.561271] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:43.174 [2024-12-05 10:06:30.561283] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:43.174 [2024-12-05 10:06:30.561291] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:43.174 [2024-12-05 10:06:30.561299] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:43.174 [2024-12-05 10:06:30.561308] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:43.174 [2024-12-05 10:06:30.561316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.174 [2024-12-05 10:06:30.561323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:43.174 [2024-12-05 10:06:30.561331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:34:43.174 [2024-12-05 10:06:30.561338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.174 [2024-12-05 10:06:30.561425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.174 [2024-12-05 10:06:30.561436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:43.174 [2024-12-05 10:06:30.561444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:34:43.174 [2024-12-05 10:06:30.561455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.174 [2024-12-05 10:06:30.561574] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:43.174 [2024-12-05 10:06:30.561588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:43.174 [2024-12-05 10:06:30.561596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:43.174 [2024-12-05 10:06:30.561604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:43.174 [2024-12-05 10:06:30.561613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:43.174 [2024-12-05 10:06:30.561621] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:43.174 [2024-12-05 10:06:30.561630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:43.174 [2024-12-05 10:06:30.561641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:43.174 [2024-12-05 10:06:30.561650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:43.174 [2024-12-05 10:06:30.561657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:43.174 [2024-12-05 10:06:30.561665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:43.174 [2024-12-05 10:06:30.561676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:43.174 [2024-12-05 10:06:30.561683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:43.174 [2024-12-05 10:06:30.561690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:43.174 [2024-12-05 10:06:30.561697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:43.174 [2024-12-05 10:06:30.561712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:43.174 [2024-12-05 10:06:30.561719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:43.174 [2024-12-05 10:06:30.561726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:43.174 [2024-12-05 10:06:30.561732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:43.174 [2024-12-05 10:06:30.561739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:43.174 [2024-12-05 10:06:30.561746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:43.174 [2024-12-05 10:06:30.561752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:43.174 [2024-12-05 10:06:30.561760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:43.174 [2024-12-05 10:06:30.561768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:43.174 [2024-12-05 10:06:30.561774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:43.174 [2024-12-05 10:06:30.561781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:43.174 [2024-12-05 10:06:30.561787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:43.174 [2024-12-05 10:06:30.561794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:43.174 [2024-12-05 10:06:30.561800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:43.174 [2024-12-05 10:06:30.561807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:43.174 [2024-12-05 10:06:30.561813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:43.174 [2024-12-05 10:06:30.561819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:43.174 [2024-12-05 10:06:30.561827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:43.174 [2024-12-05 10:06:30.561835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:43.174 [2024-12-05 10:06:30.561841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:43.174 [2024-12-05 10:06:30.561847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:43.174 [2024-12-05 10:06:30.561853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:43.174 [2024-12-05 
10:06:30.561859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:43.174 [2024-12-05 10:06:30.561865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:43.174 [2024-12-05 10:06:30.561871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:43.174 [2024-12-05 10:06:30.561878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:43.174 [2024-12-05 10:06:30.561885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:43.174 [2024-12-05 10:06:30.561893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:43.174 [2024-12-05 10:06:30.561901] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:43.174 [2024-12-05 10:06:30.561909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:43.174 [2024-12-05 10:06:30.561916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:43.174 [2024-12-05 10:06:30.561924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:43.174 [2024-12-05 10:06:30.561933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:43.174 [2024-12-05 10:06:30.561940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:43.174 [2024-12-05 10:06:30.561948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:43.174 [2024-12-05 10:06:30.561954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:43.174 [2024-12-05 10:06:30.561961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:43.174 [2024-12-05 10:06:30.561967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:43.174 [2024-12-05 10:06:30.561975] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:43.174 [2024-12-05 10:06:30.561985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:43.174 [2024-12-05 10:06:30.561993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:43.174 [2024-12-05 10:06:30.562002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:43.174 [2024-12-05 10:06:30.562011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:43.174 [2024-12-05 10:06:30.562018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:43.174 [2024-12-05 10:06:30.562025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:43.174 [2024-12-05 10:06:30.562032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:43.174 [2024-12-05 10:06:30.562039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:43.174 [2024-12-05 10:06:30.562046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:43.174 [2024-12-05 10:06:30.562053] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:43.174 [2024-12-05 10:06:30.562060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:43.174 [2024-12-05 10:06:30.562066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:43.174 [2024-12-05 10:06:30.562073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:43.174 [2024-12-05 10:06:30.562081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:43.174 [2024-12-05 10:06:30.562089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:43.174 [2024-12-05 10:06:30.562096] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:43.174 [2024-12-05 10:06:30.562103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:43.174 [2024-12-05 10:06:30.562111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:43.174 [2024-12-05 10:06:30.562119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:43.174 [2024-12-05 10:06:30.562126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:43.174 [2024-12-05 10:06:30.562135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:43.174 [2024-12-05 10:06:30.562145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.174 [2024-12-05 10:06:30.562153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:43.174 [2024-12-05 10:06:30.562161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:34:43.174 [2024-12-05 10:06:30.562168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.174 [2024-12-05 10:06:30.590202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.174 [2024-12-05 10:06:30.590247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:43.174 [2024-12-05 10:06:30.590260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.991 ms 00:34:43.174 [2024-12-05 10:06:30.590268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.174 [2024-12-05 10:06:30.590358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.174 [2024-12-05 10:06:30.590367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:43.174 [2024-12-05 10:06:30.590380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:34:43.174 [2024-12-05 10:06:30.590389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.174 [2024-12-05 10:06:30.634737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.174 [2024-12-05 10:06:30.634958] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:43.174 [2024-12-05 10:06:30.634982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.289 ms 00:34:43.174 [2024-12-05 10:06:30.634991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.174 [2024-12-05 10:06:30.635048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.174 [2024-12-05 10:06:30.635058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:43.174 [2024-12-05 10:06:30.635068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:43.174 [2024-12-05 10:06:30.635076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.174 [2024-12-05 10:06:30.635207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.635221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:43.175 [2024-12-05 10:06:30.635230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:34:43.175 [2024-12-05 10:06:30.635238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.635364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.635379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:43.175 [2024-12-05 10:06:30.635388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:34:43.175 [2024-12-05 10:06:30.635396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.651307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.651358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:43.175 [2024-12-05 10:06:30.651370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.891 ms 00:34:43.175 [2024-12-05 10:06:30.651378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.651568] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:43.175 [2024-12-05 10:06:30.651585] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:43.175 [2024-12-05 10:06:30.651599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.651608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:43.175 [2024-12-05 10:06:30.651618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:34:43.175 [2024-12-05 10:06:30.651627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.663904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.663948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:43.175 [2024-12-05 10:06:30.663961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.254 ms 00:34:43.175 [2024-12-05 10:06:30.663969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.664099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.664109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info 
metadata 00:34:43.175 [2024-12-05 10:06:30.664118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:34:43.175 [2024-12-05 10:06:30.664131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.664186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.664198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:43.175 [2024-12-05 10:06:30.664223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:34:43.175 [2024-12-05 10:06:30.664240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.664869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.664887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:43.175 [2024-12-05 10:06:30.664897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:34:43.175 [2024-12-05 10:06:30.664904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.664930] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:43.175 [2024-12-05 10:06:30.664942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.664949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:43.175 [2024-12-05 10:06:30.664957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:34:43.175 [2024-12-05 10:06:30.664965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.677601] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:43.175 [2024-12-05 10:06:30.677925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.677944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:43.175 [2024-12-05 10:06:30.677956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.940 ms 00:34:43.175 [2024-12-05 10:06:30.677965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.680291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.680330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:43.175 [2024-12-05 10:06:30.680342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.295 ms 00:34:43.175 [2024-12-05 10:06:30.680352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.680438] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:34:43.175 [2024-12-05 10:06:30.680916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.680940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:43.175 [2024-12-05 10:06:30.680950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:34:43.175 [2024-12-05 10:06:30.680958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.680992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.681003] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:43.175 [2024-12-05 10:06:30.681012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:43.175 [2024-12-05 10:06:30.681021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.681056] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:43.175 [2024-12-05 10:06:30.681066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.681074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:43.175 [2024-12-05 10:06:30.681083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:43.175 [2024-12-05 10:06:30.681091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.708232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.708286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:43.175 [2024-12-05 10:06:30.708301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.119 ms 00:34:43.175 [2024-12-05 10:06:30.708310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.708398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.175 [2024-12-05 10:06:30.708409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:43.175 [2024-12-05 10:06:30.708420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:34:43.175 [2024-12-05 10:06:30.708430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.175 [2024-12-05 10:06:30.709750] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.772 ms, result 0 00:34:44.557  [2024-12-05T10:06:33.130Z] Copying: 20/1024 [MB] (20 MBps) [2024-12-05T10:06:34.075Z] Copying: 30/1024 [MB] (10 MBps) [2024-12-05T10:06:35.015Z] Copying: 45/1024 [MB] (15 MBps) [2024-12-05T10:06:35.949Z] Copying: 57/1024 [MB] (11 MBps) [2024-12-05T10:06:37.335Z] Copying: 70/1024 [MB] (13 MBps) [2024-12-05T10:06:38.278Z] Copying: 81/1024 [MB] (11 MBps) [2024-12-05T10:06:39.218Z] Copying: 96/1024 [MB] (14 MBps) [2024-12-05T10:06:40.157Z] Copying: 116/1024 [MB] (20 MBps) [2024-12-05T10:06:41.097Z] Copying: 135/1024 [MB] (18 MBps) [2024-12-05T10:06:42.040Z] Copying: 152/1024 [MB] (16 MBps) [2024-12-05T10:06:42.982Z] Copying: 164/1024 [MB] (11 MBps) [2024-12-05T10:06:43.926Z] Copying: 174/1024 [MB] (10 MBps) [2024-12-05T10:06:44.961Z] Copying: 185/1024 [MB] (11 MBps) [2024-12-05T10:06:45.912Z] Copying: 201/1024 [MB] (16 MBps) [2024-12-05T10:06:47.298Z] Copying: 224/1024 [MB] (22 MBps) [2024-12-05T10:06:48.242Z] Copying: 242/1024 [MB] (18 MBps) [2024-12-05T10:06:49.189Z] Copying: 267/1024 [MB] (25 MBps) [2024-12-05T10:06:50.134Z] Copying: 278/1024 [MB] (10 MBps) [2024-12-05T10:06:51.079Z] Copying: 288/1024 [MB] (10 MBps) [2024-12-05T10:06:52.025Z] Copying: 299/1024 [MB] (10 MBps) [2024-12-05T10:06:52.970Z] Copying: 311/1024 [MB] (11 MBps) [2024-12-05T10:06:53.913Z] Copying: 322/1024 [MB] (11 MBps) [2024-12-05T10:06:55.301Z] Copying: 333/1024 [MB] (10 MBps) [2024-12-05T10:06:56.262Z] Copying: 345/1024 [MB] (12 MBps) [2024-12-05T10:06:57.206Z] Copying: 356/1024 [MB] (10 MBps) [2024-12-05T10:06:58.147Z] Copying: 371/1024 [MB] (14 MBps) [2024-12-05T10:06:59.090Z] Copying: 
381/1024 [MB] (10 MBps) [2024-12-05T10:07:00.032Z] Copying: 395/1024 [MB] (13 MBps) [2024-12-05T10:07:00.976Z] Copying: 421/1024 [MB] (25 MBps) [2024-12-05T10:07:01.921Z] Copying: 440/1024 [MB] (19 MBps) [2024-12-05T10:07:03.309Z] Copying: 460/1024 [MB] (20 MBps) [2024-12-05T10:07:04.255Z] Copying: 471/1024 [MB] (11 MBps) [2024-12-05T10:07:05.199Z] Copying: 482/1024 [MB] (10 MBps) [2024-12-05T10:07:06.141Z] Copying: 500/1024 [MB] (17 MBps) [2024-12-05T10:07:07.085Z] Copying: 517/1024 [MB] (16 MBps) [2024-12-05T10:07:08.030Z] Copying: 534/1024 [MB] (17 MBps) [2024-12-05T10:07:08.974Z] Copying: 553/1024 [MB] (18 MBps) [2024-12-05T10:07:09.919Z] Copying: 567/1024 [MB] (14 MBps) [2024-12-05T10:07:11.307Z] Copying: 585/1024 [MB] (17 MBps) [2024-12-05T10:07:12.250Z] Copying: 603/1024 [MB] (18 MBps) [2024-12-05T10:07:13.195Z] Copying: 616/1024 [MB] (12 MBps) [2024-12-05T10:07:14.137Z] Copying: 629/1024 [MB] (12 MBps) [2024-12-05T10:07:15.080Z] Copying: 644/1024 [MB] (15 MBps) [2024-12-05T10:07:16.025Z] Copying: 656/1024 [MB] (11 MBps) [2024-12-05T10:07:17.035Z] Copying: 670/1024 [MB] (13 MBps) [2024-12-05T10:07:17.978Z] Copying: 685/1024 [MB] (15 MBps) [2024-12-05T10:07:18.923Z] Copying: 700/1024 [MB] (14 MBps) [2024-12-05T10:07:20.313Z] Copying: 723/1024 [MB] (23 MBps) [2024-12-05T10:07:21.254Z] Copying: 738/1024 [MB] (14 MBps) [2024-12-05T10:07:22.194Z] Copying: 761/1024 [MB] (23 MBps) [2024-12-05T10:07:23.135Z] Copying: 787/1024 [MB] (25 MBps) [2024-12-05T10:07:24.076Z] Copying: 808/1024 [MB] (21 MBps) [2024-12-05T10:07:25.018Z] Copying: 827/1024 [MB] (18 MBps) [2024-12-05T10:07:26.007Z] Copying: 848/1024 [MB] (21 MBps) [2024-12-05T10:07:26.949Z] Copying: 859/1024 [MB] (10 MBps) [2024-12-05T10:07:28.336Z] Copying: 870/1024 [MB] (10 MBps) [2024-12-05T10:07:29.279Z] Copying: 882/1024 [MB] (12 MBps) [2024-12-05T10:07:30.224Z] Copying: 898/1024 [MB] (15 MBps) [2024-12-05T10:07:31.169Z] Copying: 909/1024 [MB] (11 MBps) [2024-12-05T10:07:32.114Z] Copying: 920/1024 [MB] (10 MBps) [2024-12-05T10:07:33.061Z] Copying: 933/1024 [MB] (13 MBps) [2024-12-05T10:07:34.008Z] Copying: 945/1024 [MB] (11 MBps) [2024-12-05T10:07:34.953Z] Copying: 956/1024 [MB] (10 MBps) [2024-12-05T10:07:36.338Z] Copying: 966/1024 [MB] (10 MBps) [2024-12-05T10:07:36.912Z] Copying: 977/1024 [MB] (10 MBps) [2024-12-05T10:07:38.300Z] Copying: 988/1024 [MB] (10 MBps) [2024-12-05T10:07:39.243Z] Copying: 999/1024 [MB] (11 MBps) [2024-12-05T10:07:39.506Z] Copying: 1012/1024 [MB] (13 MBps) [2024-12-05T10:07:39.506Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-05 10:07:39.447140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.877 [2024-12-05 10:07:39.447187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:51.877 [2024-12-05 10:07:39.447198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:35:51.877 [2024-12-05 10:07:39.447204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.877 [2024-12-05 10:07:39.447220] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:51.877 [2024-12-05 10:07:39.449330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.877 [2024-12-05 10:07:39.449356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:51.877 [2024-12-05 10:07:39.449365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.099 ms 00:35:51.877 [2024-12-05 10:07:39.449376] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.877 [2024-12-05 10:07:39.449546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.877 [2024-12-05 10:07:39.449555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:51.877 [2024-12-05 10:07:39.449562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:35:51.877 [2024-12-05 10:07:39.449568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.877 [2024-12-05 10:07:39.449589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.877 [2024-12-05 10:07:39.449595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:51.877 [2024-12-05 10:07:39.449602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:51.877 [2024-12-05 10:07:39.449608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.877 [2024-12-05 10:07:39.449644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.877 [2024-12-05 10:07:39.449653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:51.877 [2024-12-05 10:07:39.449658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:35:51.877 [2024-12-05 10:07:39.449664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.877 [2024-12-05 10:07:39.449674] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:51.877 [2024-12-05 10:07:39.449684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:35:51.877 [2024-12-05 10:07:39.449691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:51.877 [2024-12-05 10:07:39.449697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:51.877 [2024-12-05 10:07:39.449703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:51.877 [2024-12-05 10:07:39.449709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:51.877 [2024-12-05 10:07:39.449715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:51.877 [2024-12-05 10:07:39.449721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:51.877 [2024-12-05 10:07:39.449727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:51.877 [2024-12-05 10:07:39.449733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:51.877 [2024-12-05 10:07:39.449738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:51.877 [2024-12-05 10:07:39.449744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:51.877 [2024-12-05 10:07:39.449750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:51.877 [2024-12-05 10:07:39.449756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:51.877 [2024-12-05 10:07:39.449762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 
10:07:39.449768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:35:51.878 [2024-12-05 10:07:39.449912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.449995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:51.878 [2024-12-05 10:07:39.450285] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:51.878 [2024-12-05 10:07:39.450291] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0d7f734e-16bb-40f2-894f-12462e7ca1e0 00:35:51.879 [2024-12-05 10:07:39.450297] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:35:51.879 [2024-12-05 10:07:39.450303] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4128 00:35:51.879 [2024-12-05 10:07:39.450310] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4096 00:35:51.879 [2024-12-05 10:07:39.450317] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0078 00:35:51.879 [2024-12-05 10:07:39.450323] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:51.879 [2024-12-05 10:07:39.450329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:51.879 [2024-12-05 10:07:39.450334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:51.879 [2024-12-05 10:07:39.450339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:51.879 [2024-12-05 10:07:39.450344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:51.879 [2024-12-05 10:07:39.450349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.879 [2024-12-05 10:07:39.450355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:51.879 [2024-12-05 10:07:39.450361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:35:51.879 [2024-12-05 10:07:39.450367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.879 [2024-12-05 10:07:39.460516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.879 [2024-12-05 10:07:39.460609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:35:51.879 [2024-12-05 10:07:39.461258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.130 ms 00:35:51.879 [2024-12-05 10:07:39.461314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.879 [2024-12-05 10:07:39.461610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.879 [2024-12-05 10:07:39.461635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:51.879 [2024-12-05 10:07:39.461696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:35:51.879 [2024-12-05 10:07:39.461713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.879 [2024-12-05 10:07:39.487386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.879 [2024-12-05 10:07:39.487475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:51.879 [2024-12-05 10:07:39.487521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.879 [2024-12-05 10:07:39.487539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.879 [2024-12-05 10:07:39.487593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.879 [2024-12-05 10:07:39.487609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:51.879 [2024-12-05 10:07:39.487625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.879 [2024-12-05 10:07:39.487640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.879 [2024-12-05 10:07:39.487687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.879 [2024-12-05 10:07:39.487748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:51.879 [2024-12-05 10:07:39.487764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.879 [2024-12-05 10:07:39.487778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.879 [2024-12-05 10:07:39.487798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.879 [2024-12-05 10:07:39.487813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:51.879 [2024-12-05 10:07:39.487829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.879 [2024-12-05 10:07:39.487861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.140 [2024-12-05 10:07:39.547161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.140 [2024-12-05 10:07:39.547279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:52.140 [2024-12-05 10:07:39.547292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.140 [2024-12-05 10:07:39.547299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.140 [2024-12-05 10:07:39.595458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.140 [2024-12-05 10:07:39.595486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:52.140 [2024-12-05 10:07:39.595494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.140 [2024-12-05 10:07:39.595500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.140 [2024-12-05 10:07:39.595556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.140 [2024-12-05 
10:07:39.595563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:52.140 [2024-12-05 10:07:39.595573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.140 [2024-12-05 10:07:39.595579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.140 [2024-12-05 10:07:39.595604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.140 [2024-12-05 10:07:39.595610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:52.140 [2024-12-05 10:07:39.595617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.140 [2024-12-05 10:07:39.595623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.141 [2024-12-05 10:07:39.595677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.141 [2024-12-05 10:07:39.595685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:52.141 [2024-12-05 10:07:39.595691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.141 [2024-12-05 10:07:39.595698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.141 [2024-12-05 10:07:39.595717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.141 [2024-12-05 10:07:39.595724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:52.141 [2024-12-05 10:07:39.595730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.141 [2024-12-05 10:07:39.595736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.141 [2024-12-05 10:07:39.595762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.141 [2024-12-05 10:07:39.595768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:52.141 [2024-12-05 10:07:39.595774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.141 [2024-12-05 10:07:39.595781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.141 [2024-12-05 10:07:39.595811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.141 [2024-12-05 10:07:39.595818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:52.141 [2024-12-05 10:07:39.595824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.141 [2024-12-05 10:07:39.595830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.141 [2024-12-05 10:07:39.595919] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 148.755 ms, result 0 00:35:52.714 00:35:52.714 00:35:52.714 10:07:40 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:55.266 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:55.266 
10:07:42 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 83984 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 83984 ']' 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 83984 00:35:55.266 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83984) - No such process 00:35:55.266 Process with pid 83984 is not found 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 83984 is not found' 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:35:55.266 Remove shared memory files 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_0d7f734e-16bb-40f2-894f-12462e7ca1e0_band_md /dev/hugepages/ftl_0d7f734e-16bb-40f2-894f-12462e7ca1e0_l2p_l1 /dev/hugepages/ftl_0d7f734e-16bb-40f2-894f-12462e7ca1e0_l2p_l2 /dev/hugepages/ftl_0d7f734e-16bb-40f2-894f-12462e7ca1e0_l2p_l2_ctx /dev/hugepages/ftl_0d7f734e-16bb-40f2-894f-12462e7ca1e0_nvc_md /dev/hugepages/ftl_0d7f734e-16bb-40f2-894f-12462e7ca1e0_p2l_pool /dev/hugepages/ftl_0d7f734e-16bb-40f2-894f-12462e7ca1e0_sb /dev/hugepages/ftl_0d7f734e-16bb-40f2-894f-12462e7ca1e0_sb_shm /dev/hugepages/ftl_0d7f734e-16bb-40f2-894f-12462e7ca1e0_trim_bitmap /dev/hugepages/ftl_0d7f734e-16bb-40f2-894f-12462e7ca1e0_trim_log /dev/hugepages/ftl_0d7f734e-16bb-40f2-894f-12462e7ca1e0_trim_md /dev/hugepages/ftl_0d7f734e-16bb-40f2-894f-12462e7ca1e0_vmap 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:35:55.266 00:35:55.266 real 4m45.329s 00:35:55.266 user 4m33.110s 00:35:55.266 sys 0m11.928s 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:55.266 10:07:42 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:35:55.266 ************************************ 00:35:55.266 END TEST ftl_restore_fast 00:35:55.266 ************************************ 00:35:55.266 10:07:42 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:35:55.266 10:07:42 ftl -- ftl/ftl.sh@14 -- # killprocess 74938 00:35:55.266 10:07:42 ftl -- common/autotest_common.sh@954 -- # '[' -z 74938 ']' 00:35:55.266 Process with pid 74938 is not found 00:35:55.266 10:07:42 ftl -- common/autotest_common.sh@958 -- # kill -0 74938 00:35:55.266 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74938) - No such process 00:35:55.266 10:07:42 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74938 is not found' 00:35:55.266 10:07:42 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:35:55.266 10:07:42 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86879 00:35:55.266 10:07:42 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86879 00:35:55.266 10:07:42 ftl -- common/autotest_common.sh@835 -- # '[' -z 86879 ']' 00:35:55.266 10:07:42 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:55.266 10:07:42 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:55.266 10:07:42 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:55.266 10:07:42 ftl -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:55.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:55.266 10:07:42 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:55.266 10:07:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:55.266 [2024-12-05 10:07:42.760047] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:35:55.266 [2024-12-05 10:07:42.760885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86879 ] 00:35:55.527 [2024-12-05 10:07:42.924470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.527 [2024-12-05 10:07:43.009944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.098 10:07:43 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:56.098 10:07:43 ftl -- common/autotest_common.sh@868 -- # return 0 00:35:56.098 10:07:43 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:35:56.359 nvme0n1 00:35:56.359 10:07:43 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:35:56.359 10:07:43 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:56.359 10:07:43 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:56.620 10:07:44 ftl -- ftl/common.sh@28 -- # stores=af9fc5c5-c8db-4c17-b39a-444db5a80183 00:35:56.620 10:07:44 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:35:56.620 10:07:44 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u af9fc5c5-c8db-4c17-b39a-444db5a80183 00:35:56.882 10:07:44 ftl -- ftl/ftl.sh@23 -- # killprocess 86879 00:35:56.882 10:07:44 ftl -- common/autotest_common.sh@954 -- # '[' -z 86879 ']' 00:35:56.882 10:07:44 ftl -- common/autotest_common.sh@958 -- # kill -0 86879 00:35:56.882 10:07:44 ftl -- common/autotest_common.sh@959 -- # uname 00:35:56.882 10:07:44 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:56.882 10:07:44 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86879 00:35:56.882 10:07:44 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:56.882 killing process with pid 86879 00:35:56.882 10:07:44 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:56.882 10:07:44 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86879' 00:35:56.882 10:07:44 ftl -- common/autotest_common.sh@973 -- # kill 86879 00:35:56.882 10:07:44 ftl -- common/autotest_common.sh@978 -- # wait 86879 00:35:58.266 10:07:45 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:58.266 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:58.266 Waiting for block devices as requested 00:35:58.266 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:58.266 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:58.527 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:58.527 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:03.914 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:03.914 Remove shared memory files 00:36:03.914 
10:07:51 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:03.914 10:07:51 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:03.914 10:07:51 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:03.914 10:07:51 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:03.914 10:07:51 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:03.914 10:07:51 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:03.914 10:07:51 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:03.914 00:36:03.914 real 18m16.524s 00:36:03.914 user 20m40.827s 00:36:03.914 sys 1m21.993s 00:36:03.914 10:07:51 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:03.914 10:07:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:03.914 ************************************ 00:36:03.914 END TEST ftl 00:36:03.914 ************************************ 00:36:03.914 10:07:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:03.914 10:07:51 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:03.914 10:07:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:03.914 10:07:51 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:03.914 10:07:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:03.915 10:07:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:03.915 10:07:51 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:03.915 10:07:51 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:03.915 10:07:51 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:03.915 10:07:51 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:03.915 10:07:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:03.915 10:07:51 -- common/autotest_common.sh@10 -- # set +x 00:36:03.915 10:07:51 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:03.915 10:07:51 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:03.915 10:07:51 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:03.915 10:07:51 -- common/autotest_common.sh@10 -- # set +x 00:36:05.303 INFO: APP EXITING 00:36:05.303 INFO: killing all VMs 00:36:05.303 INFO: killing vhost app 00:36:05.303 INFO: EXIT DONE 00:36:05.564 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:05.825 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:05.825 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:05.825 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:05.825 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:06.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:06.657 Cleaning 00:36:06.657 Removing: /var/run/dpdk/spdk0/config 00:36:06.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:06.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:06.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:06.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:06.657 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:06.657 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:06.657 Removing: /var/run/dpdk/spdk0 00:36:06.657 Removing: /var/run/dpdk/spdk_pid56942 00:36:06.657 Removing: /var/run/dpdk/spdk_pid57149 00:36:06.657 Removing: /var/run/dpdk/spdk_pid57367 00:36:06.657 Removing: /var/run/dpdk/spdk_pid57460 00:36:06.657 Removing: /var/run/dpdk/spdk_pid57494 00:36:06.657 Removing: /var/run/dpdk/spdk_pid57617 00:36:06.657 Removing: /var/run/dpdk/spdk_pid57629 00:36:06.657 Removing: /var/run/dpdk/spdk_pid57823 00:36:06.657 Removing: 
/var/run/dpdk/spdk_pid57916 00:36:06.657 Removing: /var/run/dpdk/spdk_pid58012 00:36:06.657 Removing: /var/run/dpdk/spdk_pid58112 00:36:06.657 Removing: /var/run/dpdk/spdk_pid58203 00:36:06.657 Removing: /var/run/dpdk/spdk_pid58243 00:36:06.657 Removing: /var/run/dpdk/spdk_pid58274 00:36:06.657 Removing: /var/run/dpdk/spdk_pid58350 00:36:06.657 Removing: /var/run/dpdk/spdk_pid58445 00:36:06.657 Removing: /var/run/dpdk/spdk_pid58870 00:36:06.657 Removing: /var/run/dpdk/spdk_pid58934 00:36:06.657 Removing: /var/run/dpdk/spdk_pid58986 00:36:06.657 Removing: /var/run/dpdk/spdk_pid59002 00:36:06.658 Removing: /var/run/dpdk/spdk_pid59093 00:36:06.658 Removing: /var/run/dpdk/spdk_pid59109 00:36:06.658 Removing: /var/run/dpdk/spdk_pid59200 00:36:06.658 Removing: /var/run/dpdk/spdk_pid59216 00:36:06.658 Removing: /var/run/dpdk/spdk_pid59275 00:36:06.658 Removing: /var/run/dpdk/spdk_pid59287 00:36:06.658 Removing: /var/run/dpdk/spdk_pid59340 00:36:06.658 Removing: /var/run/dpdk/spdk_pid59358 00:36:06.658 Removing: /var/run/dpdk/spdk_pid59513 00:36:06.658 Removing: /var/run/dpdk/spdk_pid59549 00:36:06.658 Removing: /var/run/dpdk/spdk_pid59633 00:36:06.658 Removing: /var/run/dpdk/spdk_pid59805 00:36:06.658 Removing: /var/run/dpdk/spdk_pid59883 00:36:06.658 Removing: /var/run/dpdk/spdk_pid59920 00:36:06.658 Removing: /var/run/dpdk/spdk_pid60347 00:36:06.658 Removing: /var/run/dpdk/spdk_pid60440 00:36:06.658 Removing: /var/run/dpdk/spdk_pid60551 00:36:06.658 Removing: /var/run/dpdk/spdk_pid60604 00:36:06.658 Removing: /var/run/dpdk/spdk_pid60624 00:36:06.658 Removing: /var/run/dpdk/spdk_pid60708 00:36:06.658 Removing: /var/run/dpdk/spdk_pid61333 00:36:06.658 Removing: /var/run/dpdk/spdk_pid61369 00:36:06.658 Removing: /var/run/dpdk/spdk_pid61837 00:36:06.658 Removing: /var/run/dpdk/spdk_pid61929 00:36:06.658 Removing: /var/run/dpdk/spdk_pid62045 00:36:06.658 Removing: /var/run/dpdk/spdk_pid62098 00:36:06.919 Removing: /var/run/dpdk/spdk_pid62118 00:36:06.919 Removing: /var/run/dpdk/spdk_pid62150 00:36:06.919 Removing: /var/run/dpdk/spdk_pid63985 00:36:06.919 Removing: /var/run/dpdk/spdk_pid64111 00:36:06.919 Removing: /var/run/dpdk/spdk_pid64126 00:36:06.919 Removing: /var/run/dpdk/spdk_pid64138 00:36:06.919 Removing: /var/run/dpdk/spdk_pid64177 00:36:06.919 Removing: /var/run/dpdk/spdk_pid64181 00:36:06.919 Removing: /var/run/dpdk/spdk_pid64193 00:36:06.919 Removing: /var/run/dpdk/spdk_pid64238 00:36:06.919 Removing: /var/run/dpdk/spdk_pid64242 00:36:06.919 Removing: /var/run/dpdk/spdk_pid64254 00:36:06.919 Removing: /var/run/dpdk/spdk_pid64299 00:36:06.919 Removing: /var/run/dpdk/spdk_pid64303 00:36:06.919 Removing: /var/run/dpdk/spdk_pid64315 00:36:06.919 Removing: /var/run/dpdk/spdk_pid65700 00:36:06.919 Removing: /var/run/dpdk/spdk_pid65797 00:36:06.919 Removing: /var/run/dpdk/spdk_pid67198 00:36:06.919 Removing: /var/run/dpdk/spdk_pid68946 00:36:06.919 Removing: /var/run/dpdk/spdk_pid69020 00:36:06.919 Removing: /var/run/dpdk/spdk_pid69096 00:36:06.919 Removing: /var/run/dpdk/spdk_pid69206 00:36:06.919 Removing: /var/run/dpdk/spdk_pid69292 00:36:06.919 Removing: /var/run/dpdk/spdk_pid69388 00:36:06.919 Removing: /var/run/dpdk/spdk_pid69462 00:36:06.919 Removing: /var/run/dpdk/spdk_pid69537 00:36:06.919 Removing: /var/run/dpdk/spdk_pid69641 00:36:06.919 Removing: /var/run/dpdk/spdk_pid69734 00:36:06.919 Removing: /var/run/dpdk/spdk_pid69832 00:36:06.919 Removing: /var/run/dpdk/spdk_pid69900 00:36:06.919 Removing: /var/run/dpdk/spdk_pid69976 00:36:06.919 Removing: /var/run/dpdk/spdk_pid70080 
00:36:06.919 Removing: /var/run/dpdk/spdk_pid70166 00:36:06.919 Removing: /var/run/dpdk/spdk_pid70267 00:36:06.919 Removing: /var/run/dpdk/spdk_pid70336 00:36:06.919 Removing: /var/run/dpdk/spdk_pid70411 00:36:06.919 Removing: /var/run/dpdk/spdk_pid70515 00:36:06.919 Removing: /var/run/dpdk/spdk_pid70607 00:36:06.919 Removing: /var/run/dpdk/spdk_pid70697 00:36:06.919 Removing: /var/run/dpdk/spdk_pid70771 00:36:06.919 Removing: /var/run/dpdk/spdk_pid70845 00:36:06.919 Removing: /var/run/dpdk/spdk_pid70922 00:36:06.919 Removing: /var/run/dpdk/spdk_pid70995 00:36:06.919 Removing: /var/run/dpdk/spdk_pid71098 00:36:06.919 Removing: /var/run/dpdk/spdk_pid71189 00:36:06.919 Removing: /var/run/dpdk/spdk_pid71284 00:36:06.919 Removing: /var/run/dpdk/spdk_pid71352 00:36:06.919 Removing: /var/run/dpdk/spdk_pid71432 00:36:06.919 Removing: /var/run/dpdk/spdk_pid71506 00:36:06.919 Removing: /var/run/dpdk/spdk_pid71580 00:36:06.919 Removing: /var/run/dpdk/spdk_pid71689 00:36:06.919 Removing: /var/run/dpdk/spdk_pid71782 00:36:06.919 Removing: /var/run/dpdk/spdk_pid71926 00:36:06.919 Removing: /var/run/dpdk/spdk_pid72210 00:36:06.919 Removing: /var/run/dpdk/spdk_pid72245 00:36:06.919 Removing: /var/run/dpdk/spdk_pid72703 00:36:06.919 Removing: /var/run/dpdk/spdk_pid72888 00:36:06.919 Removing: /var/run/dpdk/spdk_pid72989 00:36:06.919 Removing: /var/run/dpdk/spdk_pid73100 00:36:06.919 Removing: /var/run/dpdk/spdk_pid73144 00:36:06.919 Removing: /var/run/dpdk/spdk_pid73175 00:36:06.919 Removing: /var/run/dpdk/spdk_pid73465 00:36:06.919 Removing: /var/run/dpdk/spdk_pid73520 00:36:06.919 Removing: /var/run/dpdk/spdk_pid73587 00:36:06.919 Removing: /var/run/dpdk/spdk_pid73986 00:36:06.919 Removing: /var/run/dpdk/spdk_pid74133 00:36:06.919 Removing: /var/run/dpdk/spdk_pid74938 00:36:06.919 Removing: /var/run/dpdk/spdk_pid75066 00:36:06.919 Removing: /var/run/dpdk/spdk_pid75225 00:36:06.919 Removing: /var/run/dpdk/spdk_pid75317 00:36:06.919 Removing: /var/run/dpdk/spdk_pid75675 00:36:06.919 Removing: /var/run/dpdk/spdk_pid75962 00:36:06.919 Removing: /var/run/dpdk/spdk_pid76320 00:36:06.919 Removing: /var/run/dpdk/spdk_pid76496 00:36:06.919 Removing: /var/run/dpdk/spdk_pid76643 00:36:06.919 Removing: /var/run/dpdk/spdk_pid76690 00:36:06.919 Removing: /var/run/dpdk/spdk_pid76854 00:36:06.919 Removing: /var/run/dpdk/spdk_pid76875 00:36:06.919 Removing: /var/run/dpdk/spdk_pid76929 00:36:06.919 Removing: /var/run/dpdk/spdk_pid77197 00:36:06.919 Removing: /var/run/dpdk/spdk_pid77429 00:36:06.919 Removing: /var/run/dpdk/spdk_pid78065 00:36:06.919 Removing: /var/run/dpdk/spdk_pid78800 00:36:06.919 Removing: /var/run/dpdk/spdk_pid79409 00:36:06.919 Removing: /var/run/dpdk/spdk_pid80223 00:36:06.919 Removing: /var/run/dpdk/spdk_pid80367 00:36:06.919 Removing: /var/run/dpdk/spdk_pid80444 00:36:06.919 Removing: /var/run/dpdk/spdk_pid80970 00:36:06.919 Removing: /var/run/dpdk/spdk_pid81022 00:36:06.919 Removing: /var/run/dpdk/spdk_pid81600 00:36:06.919 Removing: /var/run/dpdk/spdk_pid82147 00:36:06.919 Removing: /var/run/dpdk/spdk_pid82925 00:36:06.919 Removing: /var/run/dpdk/spdk_pid83047 00:36:06.919 Removing: /var/run/dpdk/spdk_pid83096 00:36:06.919 Removing: /var/run/dpdk/spdk_pid83153 00:36:06.919 Removing: /var/run/dpdk/spdk_pid83210 00:36:06.919 Removing: /var/run/dpdk/spdk_pid83274 00:36:06.919 Removing: /var/run/dpdk/spdk_pid83472 00:36:07.180 Removing: /var/run/dpdk/spdk_pid83552 00:36:07.180 Removing: /var/run/dpdk/spdk_pid83625 00:36:07.180 Removing: /var/run/dpdk/spdk_pid83686 00:36:07.180 Removing: 
/var/run/dpdk/spdk_pid83727 00:36:07.180 Removing: /var/run/dpdk/spdk_pid83789 00:36:07.180 Removing: /var/run/dpdk/spdk_pid83984 00:36:07.180 Removing: /var/run/dpdk/spdk_pid84208 00:36:07.180 Removing: /var/run/dpdk/spdk_pid84787 00:36:07.180 Removing: /var/run/dpdk/spdk_pid85450 00:36:07.180 Removing: /var/run/dpdk/spdk_pid86136 00:36:07.180 Removing: /var/run/dpdk/spdk_pid86879 00:36:07.180 Clean 00:36:07.180 10:07:54 -- common/autotest_common.sh@1453 -- # return 0 00:36:07.180 10:07:54 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:07.180 10:07:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:07.180 10:07:54 -- common/autotest_common.sh@10 -- # set +x 00:36:07.180 10:07:54 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:07.180 10:07:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:07.180 10:07:54 -- common/autotest_common.sh@10 -- # set +x 00:36:07.180 10:07:54 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:07.180 10:07:54 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:07.180 10:07:54 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:07.180 10:07:54 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:07.180 10:07:54 -- spdk/autotest.sh@398 -- # hostname 00:36:07.180 10:07:54 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:07.441 geninfo: WARNING: invalid characters removed from testname! 
00:36:34.029 10:08:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:36.574 10:08:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:38.490 10:08:26 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:41.038 10:08:28 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:44.352 10:08:31 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:46.899 10:08:34 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:49.447 10:08:36 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:49.447 10:08:36 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:49.447 10:08:36 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:36:49.447 10:08:36 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:49.447 10:08:36 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:49.447 10:08:36 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:36:49.447 + [[ -n 5030 ]]
00:36:49.447 + sudo kill 5030
00:36:49.459 [Pipeline] }
00:36:49.476 [Pipeline] // timeout
00:36:49.482 [Pipeline] }
00:36:49.498 [Pipeline] // stage
00:36:49.504 [Pipeline] }
00:36:49.520 [Pipeline] // catchError
00:36:49.530 [Pipeline] stage
00:36:49.532 [Pipeline] { (Stop VM)
00:36:49.546 [Pipeline] sh
00:36:49.834 + vagrant halt
00:36:52.381 ==> default: Halting domain...
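The tracefile pruning above (autotest.sh@400-407) and the flame-graph rendering in timing_finish follow a simple shape: one lcov -r pass per exclusion pattern, then flamegraph.pl over the folded timing data. A minimal sketch, assuming this run's paths; the SVG filename is an assumption, since timing_finish handles its own output redirect:

  # Drop vendored and system code from the merged tracefile (one -r pass per pattern)
  lcov -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' \
       -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info

  # Render per-step build timing as a flame graph; flamegraph.pl emits SVG on stdout
  /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
      --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt > timing.svg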
00:36:58.989 [Pipeline] sh
00:36:59.276 + vagrant destroy -f
00:37:01.931 ==> default: Removing domain...
00:37:02.202 [Pipeline] sh
00:37:02.481 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:37:02.492 [Pipeline] }
00:37:02.509 [Pipeline] // stage
00:37:02.514 [Pipeline] }
00:37:02.529 [Pipeline] // dir
00:37:02.533 [Pipeline] }
00:37:02.546 [Pipeline] // wrap
00:37:02.551 [Pipeline] }
00:37:02.562 [Pipeline] // catchError
00:37:02.570 [Pipeline] stage
00:37:02.573 [Pipeline] { (Epilogue)
00:37:02.585 [Pipeline] sh
00:37:02.872 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:08.166 [Pipeline] catchError
00:37:08.167 [Pipeline] {
00:37:08.176 [Pipeline] sh
00:37:08.459 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:08.459 Artifacts sizes are good
00:37:08.468 [Pipeline] }
00:37:08.480 [Pipeline] // catchError
00:37:08.490 [Pipeline] archiveArtifacts
00:37:08.496 Archiving artifacts
00:37:08.586 [Pipeline] cleanWs
00:37:08.597 [WS-CLEANUP] Deleting project workspace...
00:37:08.597 [WS-CLEANUP] Deferred wipeout is used...
00:37:08.603 [WS-CLEANUP] done
00:37:08.604 [Pipeline] }
00:37:08.618 [Pipeline] // stage
00:37:08.623 [Pipeline] }
00:37:08.634 [Pipeline] // node
00:37:08.639 [Pipeline] End of Pipeline
00:37:08.679 Finished: SUCCESS